<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://iridia.ulb.ac.be/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mmontes</id>
	<title>IridiaWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://iridia.ulb.ac.be/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mmontes"/>
	<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/wiki/Special:Contributions/Mmontes"/>
	<updated>2026-04-15T11:09:30Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.4</generator>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Optimization_Group_Meetings&amp;diff=5502</id>
		<title>Optimization Group Meetings</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Optimization_Group_Meetings&amp;diff=5502"/>
		<updated>2010-05-20T09:40:56Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* History of Breakfast Meetings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Breakfast Meetings==&lt;br /&gt;
&lt;br /&gt;
===Purpose===&lt;br /&gt;
&lt;br /&gt;
Breakfast meetings are informal meetings where there is a general discussion or someone presents their work. An additional goal is to have breakfast together.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Organization ===&lt;br /&gt;
&lt;br /&gt;
# Send an email to the [[Lab_responsibilities#Seminars_and_meetings | responsible for Optimization Meetings]]. Mention your name, your affiliation, the date, time and room, and a short summary of what you are going to talk about.&lt;br /&gt;
# Add a new entry to the table below.&lt;br /&gt;
# Bring breakfast.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== History of Breakfast Meetings ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|- align=&amp;quot;left&amp;quot;&lt;br /&gt;
!width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
!width=&amp;quot;200&amp;quot;|Presenter&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Summary&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-20&lt;br /&gt;
|Marco Montes de Oca&lt;br /&gt;
| The social learning strategies tournament, organized as part of the&lt;br /&gt;
[http://www.intercult.su.se/cultaptation/ cultaptation project], and &lt;br /&gt;
its results were described. No ideas about algorithm portfolios were &lt;br /&gt;
discussed.&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-??&lt;br /&gt;
|Renaud Lenne&lt;br /&gt;
| ???&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-04-29&lt;br /&gt;
|Franco Mascia&lt;br /&gt;
|The maximum clique problem, state of the art and ACO [http://dx.doi.org/10.1016/j.cor.2009.02.013] [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.2036&amp;amp;rep=rep1&amp;amp;type=pdf] [http://www710.univ-lyon1.fr/~csolnon/publications/rr-mai04.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-03-18&lt;br /&gt;
|Sabrina Oliveira&lt;br /&gt;
|Achieved results from the application of the Population Based Ant Colony Optimisation algorithm to TSP.&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-11&lt;br /&gt;
|Sara Ceschia&lt;br /&gt;
|An overview of Container Loading Problems. &lt;br /&gt;
|-&lt;br /&gt;
| 2010-01-21&lt;br /&gt;
|Yuan, Zhi (Eric)&lt;br /&gt;
| Rice cooker vs. the idea of tuning. &lt;br /&gt;
|-&lt;br /&gt;
|2010-01-07&lt;br /&gt;
|Thomas Stützle&lt;br /&gt;
|General discussion about the optimization group. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Literature Sessions==&lt;br /&gt;
&lt;br /&gt;
===Purpose===&lt;br /&gt;
&lt;br /&gt;
The goal of the Literature Sessions is to examine and discuss&lt;br /&gt;
particularly interesting papers from the research literature. For each&lt;br /&gt;
session, one paper will be selected, there will be a short&lt;br /&gt;
presentation (10-15 minutes) about the contents, and a discussion will&lt;br /&gt;
ensue. Sessions will last around 40-45 minutes. Attendees should read&lt;br /&gt;
the paper before the session in order to have a productive discussion. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Organization ===&lt;br /&gt;
&lt;br /&gt;
# Book a room (ask Muriel).&lt;br /&gt;
# Send an email to the [[Lab_responsibilities#Seminars_and_meetings | responsible for Optimization Meetings]]. Mention your name, your affiliation, the date, time and room, a reference to the paper and the URL where it can be obtained. Also, attach the paper as a PDF.&lt;br /&gt;
# Add a new entry to the table below. Please add a complete bibliographic reference (you may find it in the homepages of the authors) and a hyperlink to a PDF (links to http://dx.doi.org are preferred).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== History of Literature Sessions ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|- align=&amp;quot;left&amp;quot;&lt;br /&gt;
!width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
!width=&amp;quot;180&amp;quot;|Presenter&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Paper discussed&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-07&lt;br /&gt;
|Zhi (Eric) Yuan&lt;br /&gt;
|Hydra: Automatically Configuring Algorithms for Portfolio-Based Selection. L. Xu, H.H. Hoos, K. Leyton-Brown. To appear at the Conference of the Association for the Advancement of Artificial Intelligence (AAAI-10), 2010. [http://ws.cs.ubc.ca/~kevinlb/pub.php?u=2010-AAAI-Hydra.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-03-04&lt;br /&gt;
|Sara Ceschia&lt;br /&gt;
|A. Schaerf and L. Di Gaspero. [[:Image:ElPaCo_presentation.pdf|Slides of the tutorial  &amp;quot;An Overview of Local Search Software Tools&amp;quot;]] given at &amp;quot;Learning and Intelligent OptimizatioN (LION 2007)&amp;quot;, December 8-12, 2007, Trento, Italy.&lt;br /&gt;
&lt;br /&gt;
L. Di Gaspero and A. Schaerf. EASY LOCAL++: An object-oriented framework for flexible design of local search algorithms. Software: Practice &amp;amp; Experience, 33(8):733–765, July 2003. [http://www.diegm.uniud.it/satt/papers/DiSc03.pdf]&lt;br /&gt;
&lt;br /&gt;
S. Cahon, N. Melab and T. El Ghazali. ParadisEO: A Framework for the Reusable Design of Parallel and Distributed Metaheuristics. Journal of Heuristics, 10(3):357-380, November 2004. [http://dx.doi.org/10.1023/B:HEUR.0000026900.92269.ec]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-19&lt;br /&gt;
|Sabrina M. de Oliveira&lt;br /&gt;
|A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Session on Real Parameter Optimization. S. Garcia, D. Molina, M. Lozano, F. Herrera - Journal of Heuristics, Volume 15, pp. 617-644, 2009. [http://sci2s.ugr.es/programacion/workshop/GarciaMolinaLozanoHerrera-JH2008.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-04&lt;br /&gt;
|Thomas Stützle&lt;br /&gt;
|Analyzing Bandit-based Adaptive Operator Selection Mechanisms. Álvaro Fialho, Luis Da Costa, Marc Schoenauer and Michèle Sebag. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-01-14&lt;br /&gt;
|Jérémie Dubois-Lacoste&lt;br /&gt;
|SATzilla: Portfolio-based Algorithm Selection for SAT. L. Xu, F. Hutter, H. H. Hoos, K. Leyton-Brown - Journal of Artificial Intelligence Research, Volume 32, pp. 565-606, 2008. [http://www.jair.org/media/2490/live-2490-3923-jair.pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2009-12-11&lt;br /&gt;
|[http://iridia.ulb.ac.be/~manuel Manuel López-Ibáñez]&lt;br /&gt;
|SATenstein: Automatically Building Local Search SAT Solvers From Components. Ashiqur R. KhudaBukhsh, Lin Xu, Holger H. Hoos and Kevin Leyton-Brown - Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09), pp. 517-524, 2009. [http://www.cs.ubc.ca/labs/beta/Projects/SATenstein/SATenstein_ijcai.pdf]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Previous Meetings==&lt;br /&gt;
&lt;br /&gt;
See [[Previous Optimization meetings | Minutes and agendas from previous meetings]].&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Optimization_Group_Meetings&amp;diff=5501</id>
		<title>Optimization Group Meetings</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Optimization_Group_Meetings&amp;diff=5501"/>
		<updated>2010-05-20T09:40:33Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* History of Breakfast Meetings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Breakfast Meetings==&lt;br /&gt;
&lt;br /&gt;
===Purpose===&lt;br /&gt;
&lt;br /&gt;
Breakfast meetings are informal meetings where there is a general discussion or someone presents their work. An additional goal is to have breakfast together.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Organization ===&lt;br /&gt;
&lt;br /&gt;
# Send an email to the [[Lab_responsibilities#Seminars_and_meetings | responsible for Optimization Meetings]]. Mention your name, your affiliation, the date, time and room, and a short summary of what you are going to talk about.&lt;br /&gt;
# Add a new entry to the table below.&lt;br /&gt;
# Bring breakfast.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== History of Breakfast Meetings ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|- align=&amp;quot;left&amp;quot;&lt;br /&gt;
!width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
!width=&amp;quot;200&amp;quot;|Presenter&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Summary&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-20&lt;br /&gt;
|Marco Montes&lt;br /&gt;
| The social learning strategies tournament, organized as part of the&lt;br /&gt;
[http://www.intercult.su.se/cultaptation/ cultaptation project], and &lt;br /&gt;
its results were described. No ideas about algorithm portfolios were &lt;br /&gt;
discussed.&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-??&lt;br /&gt;
|Renaud Lenne&lt;br /&gt;
| ???&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-04-29&lt;br /&gt;
|Franco Mascia&lt;br /&gt;
|The maximum clique problem, state of the art and ACO [http://dx.doi.org/10.1016/j.cor.2009.02.013] [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.2036&amp;amp;rep=rep1&amp;amp;type=pdf] [http://www710.univ-lyon1.fr/~csolnon/publications/rr-mai04.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-03-18&lt;br /&gt;
|Sabrina Oliveira&lt;br /&gt;
|Achieved results from the application of the Population Based Ant Colony Optimisation algorithm to TSP.&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-11&lt;br /&gt;
|Sara Ceschia&lt;br /&gt;
|An overview of Container Loading Problems. &lt;br /&gt;
|-&lt;br /&gt;
| 2010-01-21&lt;br /&gt;
|Yuan, Zhi (Eric)&lt;br /&gt;
| Rice cooker vs. the idea of tuning. &lt;br /&gt;
|-&lt;br /&gt;
|2010-01-07&lt;br /&gt;
|Thomas Stützle&lt;br /&gt;
|General discussion about the optimization group. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Literature Sessions==&lt;br /&gt;
&lt;br /&gt;
===Purpose===&lt;br /&gt;
&lt;br /&gt;
The goal of the Literature Sessions is to examine and discuss&lt;br /&gt;
particularly interesting papers from the research literature. For each&lt;br /&gt;
session, one paper will be selected, there will be a short&lt;br /&gt;
presentation (10-15 minutes) about the contents, and a discussion will&lt;br /&gt;
ensue. Sessions will last around 40-45 minutes. Attendees should read&lt;br /&gt;
the paper before the session in order to have a productive discussion. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Organization ===&lt;br /&gt;
&lt;br /&gt;
# Book a room (ask Muriel).&lt;br /&gt;
# Send an email to the [[Lab_responsibilities#Seminars_and_meetings | responsible for Optimization Meetings]]. Mention your name, your affiliation, the date, time and room, a reference to the paper and the URL where it can be obtained. Also, attach the paper as a PDF.&lt;br /&gt;
# Add a new entry to the table below. Please add a complete bibliographic reference (you may find it in the homepages of the authors) and a hyperlink to a PDF (links to http://dx.doi.org are preferred).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== History of Literature Sessions ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;sortable&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|- align=&amp;quot;left&amp;quot;&lt;br /&gt;
!width=&amp;quot;100&amp;quot;|Date&lt;br /&gt;
!width=&amp;quot;180&amp;quot;|Presenter&lt;br /&gt;
!class=&amp;quot;unsortable&amp;quot;|Paper discussed&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-05-07&lt;br /&gt;
|Zhi (Eric) Yuan&lt;br /&gt;
|Hydra: Automatically Configuring Algorithms for Portfolio-Based Selection. L. Xu, H.H. Hoos, K. Leyton-Brown. To appear at the Conference of the Association for the Advancement of Artificial Intelligence (AAAI-10), 2010. [http://ws.cs.ubc.ca/~kevinlb/pub.php?u=2010-AAAI-Hydra.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-03-04&lt;br /&gt;
|Sara Ceschia&lt;br /&gt;
|A. Schaerf and L. Di Gaspero. [[:Image:ElPaCo_presentation.pdf|Slides of the tutorial  &amp;quot;An Overview of Local Search Software Tools&amp;quot;]] given at &amp;quot;Learning and Intelligent OptimizatioN (LION 2007)&amp;quot;, December 8-12, 2007, Trento, Italy.&lt;br /&gt;
&lt;br /&gt;
L. Di Gaspero and A. Schaerf. EASY LOCAL++: An object-oriented framework for flexible design of local search algorithms. Software: Practice &amp;amp; Experience, 33(8):733–765, July 2003. [http://www.diegm.uniud.it/satt/papers/DiSc03.pdf]&lt;br /&gt;
&lt;br /&gt;
S. Cahon, N. Melab and T. El Ghazali. ParadisEO: A Framework for the Reusable Design of Parallel and Distributed Metaheuristics. Journal of Heuristics, 10(3):357-380, November 2004. [http://dx.doi.org/10.1023/B:HEUR.0000026900.92269.ec]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-19&lt;br /&gt;
|Sabrina M. de Oliveira&lt;br /&gt;
|A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Session on Real Parameter Optimization. S. Garcia, D. Molina, M. Lozano, F. Herrera - Journal of Heuristics, Volume 15, pp. 617-644, 2009. [http://sci2s.ugr.es/programacion/workshop/GarciaMolinaLozanoHerrera-JH2008.pdf]&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-02-04&lt;br /&gt;
|Thomas Stützle&lt;br /&gt;
|Analyzing Bandit-based Adaptive Operator Selection Mechanisms. Álvaro Fialho, Luis Da Costa, Marc Schoenauer and Michèle Sebag. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|2010-01-14&lt;br /&gt;
|Jérémie Dubois-Lacoste&lt;br /&gt;
|SATzilla: Portfolio-based Algorithm Selection for SAT. L. Xu, F. Hutter, H. H. Hoos, K. Leyton-Brown - Journal of Artificial Intelligence Research, Volume 32, pp. 565-606, 2008. [http://www.jair.org/media/2490/live-2490-3923-jair.pdf]&lt;br /&gt;
|-&lt;br /&gt;
|2009-12-11&lt;br /&gt;
|[http://iridia.ulb.ac.be/~manuel Manuel López-Ibáñez]&lt;br /&gt;
|SATenstein: Automatically Building Local Search SAT Solvers From Components. Ashiqur R. KhudaBukhsh, Lin Xu, Holger H. Hoos and Kevin Leyton-Brown - Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09), pp. 517-524, 2009. [http://www.cs.ubc.ca/labs/beta/Projects/SATenstein/SATenstein_ijcai.pdf]&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Previous Meetings==&lt;br /&gt;
&lt;br /&gt;
See [[Previous Optimization meetings | Minutes and agendas from previous meetings]].&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4939</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4939"/>
		<updated>2008-11-07T16:51:43Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990), was another source of inspiration in the development of the first particle swarm optimization algorithm (Kennedy, 2006). The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
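A population topology can be represented concretely as one set of neighbor indices per particle. As an illustrative sketch (the function name and the radius parameter are ours, not part of the original description), the ring topology from the figure can be built as follows:

```python
def ring_neighbors(k, radius=1):
    """Neighbor index sets (including the particle itself) for a ring
    topology over k particles; radius=1 is the two-neighbor ring."""
    return [{(i + d) % k for d in range(-radius, radius + 1)}
            for i in range(k)]

neighborhoods = ring_neighbors(5)  # particle 0 -> {4, 0, 1}, and so on
```

With radius 1 each neighborhood contains the particle and its two ring neighbors; increasing the radius interpolates toward the fully connected topology.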
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually initialized within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt; but they can also be initialized to zero or to small random values to prevent them leaving the search space during the first iterations. During the main loop of the algorithm, the particles' velocities and positions are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and a proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles going outside &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in faster propagation of the&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
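The synchronous scheme above can be sketched compactly in Python with NumPy. This is an illustration, not a reference implementation: it assumes a fully connected topology (so the neighborhood best is the swarm best), zero initial velocities, and a sphere objective; the function names and defaults are ours, with the inertia weight and acceleration coefficients set to commonly used constricted values.

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=200, w=0.7298,
        phi1=1.49618, phi2=1.49618, lower=-5.0, upper=5.0, seed=0):
    """Synchronous PSO sketch with a fully connected topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_particles, dim))  # positions in Theta'
    v = np.zeros((n_particles, dim))    # velocities initialized to zero
    b = x.copy()                        # personal bests b_i
    fb = np.array([f(p) for p in b])    # objective values f(b_i)
    for _ in range(iters):
        l = b[np.argmin(fb)]            # neighborhood best = swarm best here
        # diagonals of U1 and U2, regenerated at every iteration
        u1 = rng.uniform(size=(n_particles, dim))
        u2 = rng.uniform(size=(n_particles, dim))
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (l - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < fb              # synchronous personal-best update
        b[improved], fb[improved] = x[improved], fx[improved]
    return b[np.argmin(fb)], float(fb.min())

sphere = lambda th: float(np.sum(th ** 2))  # illustrative objective
best, value = pso(sphere, dim=5)            # value approaches the optimum 0
```

Swapping the `l = b[np.argmin(fb)]` line for a per-particle minimum over a neighbor-index set yields the general topology-based variant.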
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
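This probabilistic position update can be sketched as follows; the helper names are ours, and the random draw is injectable so the rule can be exercised deterministically:

```python
import math
import random

def sig(x):
    # logistic function: maps a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def update_binary_position(v_i, rand=random.random):
    # component j becomes 1 with probability sig(v_ij), and 0 otherwise
    return [1 if rand() < sig(v_ij) else 0 for v_ij in v_i]
```

A strongly positive velocity component drives its position bit toward 1, a strongly negative one toward 0, and a zero velocity leaves it at a fair coin flip.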
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
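As a sketch, the constriction factor can be computed for the common case in which the acceleration is treated as a single scalar phi = phi1 + phi2, as in Clerc and Kennedy's analysis (the value 4.1 below is an illustrative, widely used setting, not mandated by the article):

```python
import math

def constriction_coefficient(phi, kappa=1.0):
    # Clerc-Kennedy constriction factor; requires phi >= 4 and kappa in [0, 1]
    if phi < 4.0:
        raise ValueError("phi must be at least 4 for guaranteed convergence")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction_coefficient(4.1)  # approximately 0.7298
```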
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t} ,\sigma_{ij}^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
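The sampling step above can be written directly in Python (a minimal sketch; the function name is ours):

```python
import random

def barebones_step(personal_best, neighborhood_best):
    # sample each coordinate from a normal distribution centered between the
    # two attractors, with standard deviation equal to their distance
    return [random.gauss((b + l) / 2.0, abs(b - l))
            for b, l in zip(personal_best, neighborhood_best)]

x_new = barebones_step([1.0, 2.0], [3.0, 2.0])
```

Note that when a coordinate of the personal best and the neighborhood best coincide, the standard deviation is zero and the particle keeps that coordinate.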
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
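A sketch of the FIPS velocity update in Python; for simplicity the weight function defaults to a constant 1 (equal weighting of all neighbors), and the parameter values are illustrative assumptions:

```python
import random

def fips_velocity(v, x, neighbor_bests, w=0.729, phi=4.1, weight=None):
    # weight: maps a neighbor's personal best to [0, 1]; constant 1 by default
    if weight is None:
        weight = lambda b: 1.0
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(x)):
        # each neighbor's personal best contributes to the social term
        social = sum(weight(b) * random.random() * (b[j] - x[j])
                     for b in neighbor_bests)
        new_v.append(w * v[j] + (phi / n) * social)
    return new_v
```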
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm--explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From Private Attitude to Public Opinion: A Dynamic Theory of Social Impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4938</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4938"/>
		<updated>2008-11-07T16:44:23Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990), was another source of inspiration in the development of the first particle swarm optimization algorithm (Kennedy, 2006). The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution to the optimization problem under&lt;br /&gt;
consideration, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has&lt;br /&gt;
ever visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
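For concreteness, a ring topology like the one in the figure can be built as a list of neighbor index sets (a sketch; this representation is ours, not part of the standard formulation):

```python
def ring_neighborhoods(k):
    # ring topology: each particle p_i is a neighbor of the particles
    # immediately before and after it (indices taken modulo k)
    return [{(i - 1) % k, (i + 1) % k} for i in range(k)]

hoods = ring_neighborhoods(5)
```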
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually initialized within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;, but they can also be initialized to zero or to small random values to prevent particles from leaving the search space during the first iterations. During the main loop of the algorithm, the particles' velocities and positions are iteratively updated until a stopping criterion is met.&lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates propagate the best&lt;br /&gt;
solutions found through the swarm more quickly.&lt;br /&gt;
&lt;br /&gt;
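The pseudocode above translates almost line for line into Python. The following is a minimal sketch for a fully connected (global-best) topology with synchronous updates; the parameter values are common illustrative choices, not prescribed by the algorithm:

```python
import random

def pso(f, bounds, k=20, w=0.729, phi1=1.49, phi2=1.49, iters=300, seed=1):
    # minimal global-best PSO; bounds define the initialization box Theta'
    rng = random.Random(seed)
    n = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(k)]
    V = [[0.0] * n for _ in range(k)]             # velocities initialized to zero
    B = [x[:] for x in X]                         # personal best positions
    fB = [f(b) for b in B]                        # personal best values
    for _ in range(iters):
        g = B[min(range(k), key=fB.__getitem__)]  # best neighbor (global topology)
        for i in range(k):                        # velocity and position update loop
            for j in range(n):
                V[i][j] = (w * V[i][j]
                           + phi1 * rng.random() * (B[i][j] - X[i][j])
                           + phi2 * rng.random() * (g[j] - X[i][j]))
                X[i][j] += V[i][j]
        for i in range(k):                        # solution update loop
            fx = f(X[i])
            if fx < fB[i]:
                B[i], fB[i] = X[i][:], fx
    best = min(range(k), key=fB.__getitem__)
    return fB[best], B[best]

# example: minimize the 2-D sphere function over [-5, 5]^2
best_f, best_x = pso(lambda x: sum(t * t for t in x), [(-5.0, 5.0)] * 2)
```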
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006, and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
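For illustration, the binary position-update rule can be sketched in Python (a minimal sketch; the function and variable names are ours, not from Kennedy and Eberhart 1997):

```python
import math
import random

def sig(x):
    # logistic function: maps a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def update_binary_position(velocity):
    # the j-th position component becomes 1 with probability sig(v_j), else 0
    return [1 if random.random() < sig(v) else 0 for v in velocity]

x = update_binary_position([-2.0, 0.0, 2.0])
```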
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
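As a sketch, the constriction factor can be computed for the common case in which the acceleration is treated as a single scalar phi = phi1 + phi2, as in Clerc and Kennedy's analysis (the value 4.1 below is an illustrative, widely used setting, not mandated by the article):

```python
import math

def constriction_coefficient(phi, kappa=1.0):
    # Clerc-Kennedy constriction factor; requires phi >= 4 and kappa in [0, 1]
    if phi < 4.0:
        raise ValueError("phi must be at least 4 for guaranteed convergence")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction_coefficient(4.1)  # approximately 0.7298
```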
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t} ,\sigma_{ij}^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
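The sampling step above can be written directly in Python (a minimal sketch; the function name is ours):

```python
import random

def barebones_step(personal_best, neighborhood_best):
    # sample each coordinate from a normal distribution centered between the
    # two attractors, with standard deviation equal to their distance
    return [random.gauss((b + l) / 2.0, abs(b - l))
            for b, l in zip(personal_best, neighborhood_best)]

x_new = barebones_step([1.0, 2.0], [3.0, 2.0])
```

Note that when a coordinate of the personal best and the neighborhood best coincide, the standard deviation is zero and the particle keeps that coordinate.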
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
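A sketch of the FIPS velocity update in Python; for simplicity the weight function defaults to a constant 1 (equal weighting of all neighbors), and the parameter values are illustrative assumptions:

```python
import random

def fips_velocity(v, x, neighbor_bests, w=0.729, phi=4.1, weight=None):
    # weight: maps a neighbor's personal best to [0, 1]; constant 1 by default
    if weight is None:
        weight = lambda b: 1.0
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(x)):
        # each neighbor's personal best contributes to the social term
        social = sum(weight(b) * random.random() * (b[j] - x[j])
                     for b in neighbor_bests)
        new_v.append(w * v[j] + (phi / n) * social)
    return new_v
```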
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm--explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From Private Attitude to Public Opinion: A Dynamic Theory of Social Impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO) series.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4935</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4935"/>
		<updated>2008-11-07T16:33:39Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990), was another source of inspiration in the development of the first particle swarm optimization algorithm (Kennedy, 2006). The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the optimization problem under&lt;br /&gt;
consideration, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and a proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles going outside &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
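A minimal Python sketch of this feasibility-preserving personal-best update (the helper name and signature are illustrative, not part of the original algorithm description):&lt;br /&gt;

```python
def update_personal_best(x, fx, b, fb, feasible):
    """Return the new (personal best position, personal best value).

    Particles that left the feasible space (feasible=False) keep their
    old personal best, so they are attracted back toward feasibility
    in subsequent iterations.
    """
    if feasible and fx < fb:
        return list(x), fx
    return b, fb

# A feasible, improving particle replaces its personal best;
# an infeasible one keeps the old personal best even if f improved.
kept_feasible = update_personal_best([0.5], 0.25, [1.0], 1.0, True)
kept_infeasible = update_personal_best([9.0], 0.01, [1.0], 1.0, False)
```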
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above performs synchronous updates: the neighborhood best&lt;br /&gt;
positions are updated only after all particle positions and personal best&lt;br /&gt;
positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates therefore propagate the&lt;br /&gt;
best solutions through the swarm more quickly.&lt;br /&gt;
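The pseudocode can be sketched as a short, self-contained Python program. The fully connected topology, zero initial velocities, and the parameter values w = 0.7 and φ1 = φ2 = 1.5 are illustrative choices, not prescriptions:&lt;br /&gt;

```python
import random

def pso(f, dim, n_particles=20, w=0.7, phi1=1.5, phi2=1.5,
        bounds=(-5.0, 5.0), iterations=200, seed=0):
    """Minimize f over bounds^dim with a fully connected swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]   # velocities start at zero
    b = [xi[:] for xi in x]                         # personal best positions
    fb = [f(xi) for xi in x]                        # personal best values
    for _ in range(iterations):
        # Fully connected topology: the neighborhood best is the swarm best.
        g = min(range(n_particles), key=lambda i: fb[i])
        for i in range(n_particles):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Synchronous personal-best update, after all positions have moved.
        for i in range(n_particles):
            fx = f(x[i])
            if fx < fb[i]:
                fb[i], b[i] = fx, x[i][:]
    g = min(range(n_particles), key=lambda i: fb[i])
    return b[g], fb[g]

# Sphere function: the swarm concentrates around the minimizer at the origin.
best, value = pso(lambda p: sum(t * t for t in p), dim=3)
```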
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
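The sigmoid-based position update translates directly into code. In this sketch the velocity vector is assumed to have already been updated by the standard rule:&lt;br /&gt;

```python
import math
import random

def sigmoid(x):
    """sig(x) = 1 / (1 + e^-x): maps a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(velocity, rng=random):
    """Set each bit to 1 with probability sig(v_ij), and to 0 otherwise."""
    return [1 if rng.random() < sigmoid(vj) else 0 for vj in velocity]

bits = binary_position_update([2.0, -2.0, 0.0])
```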
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
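The Clerc and Kennedy (2002) coefficient can be computed directly. For simplicity this sketch uses a single scalar φ = φ1 + φ2 rather than per-coordinate diagonal entries:&lt;br /&gt;

```python
import math

def constriction(phi, kappa=1.0):
    """chi = 2*kappa / |2 - phi - sqrt(phi*(phi - 4))|, for phi >= 4."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(4.1)  # roughly 0.7298, a commonly used setting
```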
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t} ,\sigma_{ij}^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
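The sampling rule above can be sketched as follows (the helper name is illustrative); each coordinate of the new position is drawn from the normal distribution with the mean and standard deviation just defined:&lt;br /&gt;

```python
import random

def barebones_position(personal_best, neighborhood_best, rng=random):
    """Sample each coordinate from N((b_j + l_j) / 2, |b_j - l_j|)."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(personal_best, neighborhood_best)]

x_new = barebones_position([1.0, 2.0], [3.0, 2.0])
```

When a coordinate of the personal best and the neighborhood best coincide, the standard deviation is zero and that coordinate stays fixed.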
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
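A sketch of the FIPS velocity update, assuming the simplest choice of constant weights W(.) = 1 (the general rule allows quality-dependent weights):&lt;br /&gt;

```python
import random

def fips_velocity(v, x, neighbor_bests, w=0.7, phi=4.1, rng=random):
    """FIPS update: average the attraction toward every neighbor's best."""
    k = len(neighbor_bests)
    new_v = []
    for j in range(len(x)):
        # Each neighbor's personal best contributes phi * U * (b_j - x_j),
        # with U uniform in [0, 1), i.e. a draw from uniform(0, phi).
        acc = sum(rng.uniform(0.0, phi) * (b[j] - x[j]) for b in neighbor_bests)
        new_v.append(w * v[j] + acc / k)
    return new_v

# A particle sitting at all its neighbors' bests, at rest, stays at rest.
v_next = fips_velocity([0.0, 0.0], [1.0, 1.0], [[1.0, 1.0], [1.0, 1.0]])
```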
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From Private Attitude to Public Opinion: A Dynamic Theory of Social Impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO) series.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4934</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4934"/>
		<updated>2008-11-07T16:09:20Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990), was another source of inspiration in the development of the first particle swarm optimization algorithm (Kennedy, 2006). The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the optimization problem under&lt;br /&gt;
consideration, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
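Assuming positions and velocities are stored as plain Python lists, one application of these two update rules for a single particle can be sketched as follows. The parameter defaults are illustrative values drawn from the literature, not prescribed by the equations; since the random matrices are diagonal, they reduce to one uniform draw in [0, 1) per dimension.

```python
import random

def update_particle(x, v, b, l, w=0.729, phi1=2.05, phi2=2.05):
    """One application of the PSO velocity and position update rules.

    x, v, b, l: position, velocity, personal best, and neighborhood
    best of one particle (lists of equal length n). The diagonal
    matrices U1 and U2 become one uniform draw per dimension.
    """
    n = len(x)
    new_v = [w * v[j]
             + phi1 * random.random() * (b[j] - x[j])
             + phi2 * random.random() * (l[j] - x[j])
             for j in range(n)]
    new_x = [x[j] + new_v[j] for j in range(n)]
    return new_x, new_v
```

Note that when the personal and neighborhood bests coincide with the current position, both attraction terms vanish and the particle moves by inertia alone.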
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms that preserve solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in faster propagation of&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
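The pseudocode above translates almost line by line into the following self-contained sketch with synchronous updates. It assumes a fully connected topology (every particle's neighborhood is the whole swarm) and uses the sphere function as a stand-in objective; the parameter defaults are common choices from the literature, not part of the pseudocode itself.

```python
import random

def pso(f, dim, bounds, k=20, w=0.729, phi1=1.49, phi2=1.49, max_iters=200):
    """Standard PSO with synchronous updates and a fully connected topology."""
    lo, hi = bounds
    # Initialization: random positions, zero velocities, personal bests
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    vs = [[0.0] * dim for _ in range(k)]
    bs = [list(x) for x in xs]
    # Main loop
    for _ in range(max_iters):
        # Neighborhood best: with a fully connected topology, this is
        # the best personal best in the whole swarm at this iteration.
        l = min(bs, key=f)
        # Velocity and position update loop
        for i in range(k):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][j] = (w * vs[i][j]
                            + phi1 * r1 * (bs[i][j] - xs[i][j])
                            + phi2 * r2 * (l[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
        # Solution update loop (synchronous: after all particles moved)
        for i in range(k):
            if f(xs[i]) < f(bs[i]):
                bs[i] = list(xs[i])
    return min(bs, key=f)

def sphere(x):
    """Stand-in objective: f(x) = sum of squares, minimum at the origin."""
    return sum(t * t for t in x)
```

A usage example: `pso(sphere, dim=2, bounds=(-5.0, 5.0))` returns a point whose objective value is close to zero.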
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
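A minimal sketch of this binary position-update rule follows; the velocity vector would come from the usual velocity update, and the helper names are ours.

```python
import math
import random

def sig(v):
    """The logistic function sig(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(v):
    """Binary PSO position update: component j of the new position
    becomes 1 with probability sig(v_j), and 0 otherwise."""
    return [1 if random.random() < sig(vj) else 0 for vj in v]
```

Large positive velocity components make the corresponding bit almost surely 1, and large negative components make it almost surely 0.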
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj} &amp;gt; 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
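For reference, the widely used scalar special case of this coefficient from Clerc and Kennedy (2002), with a constant &amp;lt;math&amp;gt;\varphi = \varphi_1 + \varphi_2 &amp;gt; 4&amp;lt;/math&amp;gt;, can be computed as in this sketch (function name and defaults are illustrative; &amp;lt;math&amp;gt;\varphi_1 = \varphi_2 = 2.05&amp;lt;/math&amp;gt; is a common choice):

```python
import math

def constriction_coefficient(phi1=2.05, phi2=2.05, kappa=1.0):
    """Scalar constriction coefficient
    chi = 2*kappa / |2 - phi - sqrt(phi^2 - 4*phi)|,
    defined for phi = phi1 + phi2 > 4."""
    phi = phi1 + phi2
    if phi <= 4.0:
        raise ValueError("constriction requires phi1 + phi2 > 4")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the defaults above, this yields the familiar value chi ≈ 0.7298, which is why velocity updates are often written with an inertia weight of about 0.73.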
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t} ,\sigma_{ij}^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
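A sketch of this sampling rule for one particle (the function name is ours; `b` and `l` are the personal best and neighborhood best vectors defined above):

```python
import random

def barebones_position(b, l):
    """Bare-bones PSO position update: sample each coordinate from a
    normal distribution with mu = (b_j + l_j)/2 and sigma = |b_j - l_j|."""
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b, l)]
```

When the personal and neighborhood bests coincide, the standard deviation is zero and the particle collapses onto that point, which mirrors the stagnation behavior of the standard algorithm.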
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
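A sketch of the FIPS velocity update, taking &amp;lt;math&amp;gt;\mathcal{W}&amp;lt;/math&amp;gt; to be the constant function 1 (the simplest weighting; Mendes et al. 2004 also studied quality-based weightings). Names and parameter defaults are illustrative:

```python
import random

def fips_velocity(x, v, neighbor_bests, w=0.729, phi=4.1):
    """FIPS velocity update with W == 1: every neighbor's personal best
    contributes equally, scaled by phi / |N_i|."""
    n, m = len(x), len(neighbor_bests)
    new_v = []
    for j in range(n):
        # Sum the randomly weighted pulls toward all neighbors' bests
        pull = sum(random.random() * (b[j] - x[j]) for b in neighbor_bests)
        new_v.append(w * v[j] + (phi / m) * pull)
    return new_v
```

If every neighbor's best coincides with the particle's position, all pulls vanish and only the inertia term remains.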
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From Private Attitude to Public Opinion: A Dynamic Theory of Social Impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4933</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4933"/>
		<updated>2008-11-07T15:20:08Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990), was another source of inspiration in the development of the first particle swarm optimization algorithm (Kennedy, 2006). The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution to the optimization problem at hand, which&lt;br /&gt;
is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms that preserve solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt; then&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates of particle positions and best&lt;br /&gt;
positions: the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates propagate the best solutions&lt;br /&gt;
through the swarm more quickly.&lt;br /&gt;
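The synchronous scheme described above can be sketched compactly in Python. The following is an illustration of our own, not part of the original pseudocode: the function name, the fully connected topology (so the neighborhood best is the global best), and the parameter defaults are all assumptions.

```python
import random

def pso_minimize(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5,
                 iters=200, seed=0):
    """Synchronous PSO with a fully connected (global-best) topology."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]          # velocities start at zero
    b = [xi[:] for xi in x]                      # personal best positions
    fb = [f(xi) for xi in x]                     # personal best values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # neighborhood best = global best
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] = x[i][j] + v[i][j]
        for i in range(k):                       # synchronous personal-best update
            fx = f(x[i])
            if fb[i] > fx:
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]
```

Running it on a simple test function such as the sphere function quickly drives the objective value toward zero.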
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
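The binary position-update rule above can be illustrated directly. This is a sketch of our own (the function names are illustrative): `sig` is the logistic function just defined, and each position component is resampled to 1 with probability `sig` of the corresponding velocity component.

```python
import math, random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_i, rng=random):
    """Sample a binary position vector from a continuous velocity vector:
    component j becomes 1 with probability sig(v_j), else 0."""
    return [1 if sig(vj) > rng.random() else 0 for vj in v_i]
```

Large positive velocity components make the corresponding bit almost surely 1; large negative ones make it almost surely 0.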
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
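The coefficient is easy to compute. This helper is our own sketch of the Clerc and Kennedy (2002) scalar form, 2k/|2 - phi - sqrt(phi^2 - 4 phi)|, where phi is the total acceleration:

```python
import math

def constriction(phi, kappa=1.0):
    """Constriction coefficient chi (Clerc and Kennedy 2002),
    defined for a total acceleration phi >= 4 and kappa in [0, 1]."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

With kappa = 1 and phi = 4.1 this yields approximately 0.7298, the value commonly used in the literature.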
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; denotes a value drawn from a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
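The bare-bones update can be written in a few lines. In this sketch (our own; names are illustrative), each component of the new position is drawn from a Gaussian whose mean is the midpoint of the personal and neighborhood bests and whose standard deviation is their componentwise distance:

```python
import random

def barebones_update(b_i, l_i, rng=random):
    """Sample a new position componentwise from N(mean, std), with the mean
    at the midpoint of personal best b_i and neighborhood best l_i, and the
    std equal to their componentwise distance (Kennedy 2003)."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

Note that when the two bests coincide the spread is zero, so the particle collapses onto the shared best position.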
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
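The FIPS velocity rule can be sketched per component. This is our own illustration of the Mendes et al. (2004) update; the uniform choice weight(b) = 1 recovers the original fully informed particle swarm, and the diagonal random matrices reduce to one uniform draw per neighbor and dimension.

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.729, phi=4.1,
                  weight=None, rng=random):
    """Fully-informed velocity update: every neighbor's personal best
    contributes, scaled by phi / |N_i| and an optional quality weight."""
    if weight is None:
        weight = lambda b: 1.0           # uniform weights: original FIPS
    n = len(neighbor_bests)
    new_v = []
    for j, (vj, xj) in enumerate(zip(v_i, x_i)):
        s = 0.0
        for b in neighbor_bests:         # sum over the whole neighborhood
            s += weight(b) * rng.random() * (b[j] - xj)
        new_v.append(w * vj + phi * s / n)
    return new_v
```

With more informants, each individual attractor contributes less, which tends to make the search less greedy than the standard best-neighbor rule.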
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.) , pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From private attitude to public opinion: A dynamic theory of social impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4932</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4932"/>
		<updated>2008-11-07T15:18:43Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990; Kennedy, 2006), was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology, in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution to the optimization problem, which is defined&lt;br /&gt;
by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has&lt;br /&gt;
visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
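Written out per component, one iteration of the two update rules looks as follows in Python. This is a sketch of our own: the parameter defaults w = 0.729 and phi1 = phi2 = 1.49 are common choices from the literature, not mandated by the text, and the diagonal random matrices U1 and U2 reduce to one independent uniform draw per dimension.

```python
import random

def velocity_update(v_i, x_i, b_i, l_i, w=0.729, phi1=1.49, phi2=1.49,
                    rng=random):
    """One application of the velocity rule: inertia term plus cognitive
    attraction to the personal best b_i plus social attraction to the
    neighborhood best l_i."""
    return [w * vj
            + phi1 * rng.random() * (bj - xj)
            + phi2 * rng.random() * (lj - xj)
            for vj, xj, bj, lj in zip(v_i, x_i, b_i, l_i)]

def position_update(x_i, v_new):
    """Move the particle by its new velocity."""
    return [xj + vj for xj, vj in zip(x_i, v_new)]
```

Each call regenerates the random factors, as the text requires for the matrices U1 and U2.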
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt; then&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates of particle positions and best&lt;br /&gt;
positions: the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates propagate the best solutions&lt;br /&gt;
through the swarm more quickly.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; denotes a value drawn from a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From private attitude to public opinion: A dynamic theory of social impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4931</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4931"/>
		<updated>2008-11-07T15:14:17Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990; Kennedy, 2006), was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution to the optimization problem under&lt;br /&gt;
consideration, defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles that move outside &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
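The pseudocode above can be turned into a short, self-contained Python sketch. The fully connected topology, the sphere objective in the usage example, and the parameter values (w = 0.7, phi1 = phi2 = 1.5) are illustrative assumptions rather than part of the standard definition:

```python
import random

def pso(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=1):
    """Minimize f over [lo, hi]^dim with a fully connected swarm of k particles."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]  # positions
    v = [[0.0] * dim for _ in range(k)]        # velocities initialized to zero
    b = [xi[:] for xi in x]                    # personal best positions
    fb = [f(xi) for xi in x]                   # personal best values
    for _ in range(iters):
        # Neighborhood best: with a fully connected topology this is the
        # best personal best in the whole swarm.
        g = b[min(range(k), key=lambda i: fb[i])]
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()   # fresh random factors
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
        # Synchronous personal-best update, performed after all positions moved.
        for i in range(k):
            fx = f(x[i])
            if fx < fb[i]:
                fb[i], b[i] = fx, x[i][:]
    best = min(range(k), key=lambda i: fb[i])
    return b[best], fb[best]
```

For example, `pso(lambda z: sum(t * t for t in z), 2, (-5.0, 5.0))` returns a point close to the origin on the two-dimensional sphere function.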
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster propagation of the&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
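As an illustration, the binary position-update rule can be sketched in Python; the helper names `sig` and `binary_position_update` are hypothetical:

```python
import math
import random

def sig(x):
    # Logistic sigmoid mapping a real velocity component to a probability.
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v, rng=random):
    # Bit j becomes 1 with probability sig(v_j); the velocity itself stays
    # continuous and is updated exactly as in the standard PSO algorithm.
    return [1 if rng.random() < sig(vj) else 0 for vj in v]
```

A strongly positive velocity component makes the corresponding bit almost surely 1; a strongly negative one makes it almost surely 0.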
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
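A minimal sketch of the coefficient computation, assuming a scalar value phi &gt;= 4 so that the square root is real; the function name `constriction` is hypothetical:

```python
import math

def constriction(phi, kappa=1.0):
    # Clerc-Kennedy constriction coefficient; requires phi >= 4 so that
    # phi * (phi - 4) is non-negative and the square root is real.
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

With the commonly used setting phi = 4.1 and kappa = 1, the coefficient evaluates to approximately 0.7298.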
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t} ,\sigma_{ij}^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
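The sampling step above can be sketched in Python; `bare_bones_update` is a hypothetical helper that draws one Gaussian sample per dimension, centered midway between the personal best and the neighborhood best:

```python
import random

def bare_bones_update(b, l, rng=random):
    # One independent Gaussian sample per dimension: the mean is the midpoint
    # of the personal best b and the neighborhood best l, and the standard
    # deviation is their absolute distance in that dimension.
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj)) for bj, lj in zip(b, l)]
```

When `b` and `l` coincide in a dimension, the standard deviation is zero and the particle stays at that coordinate, which illustrates how the swarm collapses once the two attractors agree.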
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
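The FIPS velocity update can be sketched in Python. The weighting function defaults to a constant 1, which corresponds to the unweighted FIPS variant; the parameter values and the helper name `fips_velocity` are illustrative assumptions:

```python
import random

def fips_velocity(v, x, neighbor_bests, w=0.7, phi=4.1,
                  weight=lambda b: 1.0, rng=random):
    # v, x: current velocity and position of the particle (lists of floats).
    # neighbor_bests: personal best positions of all neighbors.
    k = len(neighbor_bests)
    new_v = []
    for j in range(len(v)):
        social = 0.0
        for b in neighbor_bests:
            # One fresh uniform draw per neighbor and per dimension.
            social += weight(b) * rng.uniform(0.0, 1.0) * (b[j] - x[j])
        new_v.append(w * v[j] + (phi / k) * social)
    return new_v
```

Note that when every neighbor's best coincides with the particle's position, the social term vanishes and only the inertia term remains.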
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Swarm Intelligence. In ''Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies''. A. Y. Zomaya (Ed.), pages 187-219, Springer US, Secaucus, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
A. Nowak, J. Szamrej, and B. Latané. From private attitude to public opinion: A dynamic theory of social impact. ''Psychological Review'', 97(3):362-376, 1990.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4930</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4930"/>
		<updated>2008-11-07T15:07:00Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular the dynamic theory of social impact (Nowak, Szamrej &amp;amp; Latané, 1990; Kennedy, 2006), was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
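As a toy numeric illustration of the definition above (not part of the article), the set of minimizers can be found by enumeration when the feasible set is replaced by a small finite grid:

```python
# Enumerate a finite stand-in for the feasible set Theta and collect every
# point attaining the minimum objective value, mirroring the set Theta*.
theta = [x / 10.0 for x in range(-20, 21)]   # grid over [-2, 2]

def f(t):
    return t * t                             # toy objective f(theta) = theta^2

best = min(f(t) for t in theta)
theta_star = [t for t in theta if f(t) == best]
```

Here `theta_star` contains the single grid point `0.0`, the unique minimizer of the toy objective.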
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
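Two of the topologies in the figure can be encoded directly as neighbor sets; a minimal sketch (function names are illustrative, not from the article):

```python
def ring_topology(k):
    """Neighborhoods for a ring topology: particle i is a neighbor of the
    particles immediately before and after it (indices wrap around)."""
    return [{(i - 1) % k, (i + 1) % k} for i in range(k)]

def fully_connected_topology(k):
    """Fully connected topology: every particle's neighborhood is the whole
    swarm (self-links omitted, as in the figure)."""
    return [set(range(k)) - {i} for i in range(k)]
```

For example, in a five-particle ring, particle 0 has particles 1 and 4 as neighbors.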
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', represents the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates allow best solutions to&lt;br /&gt;
propagate through the swarm more quickly.&lt;br /&gt;
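The pseudocode above, with a fully connected topology and a neighborhood best refreshed once per iteration, can be sketched in Python roughly as follows (the function name, parameter defaults, bounds, and the sphere test function are all illustrative assumptions, not prescribed by the article):

```python
import random

def pso(f, dim, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=100, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch: fully connected topology, zero initial velocities,
    neighborhood best computed once per iteration from the personal bests."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]        # velocities initialized to zero
    b = [xi[:] for xi in x]                    # personal best positions
    fb = [f(xi) for xi in x]                   # personal best values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])
        l = b[g][:]                            # neighborhood best (global here)
        for i in range(k):
            for j in range(dim):
                u1, u2 = random.random(), random.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (l[j] - x[i][j]))
                x[i][j] += v[i][j]
            fx = f(x[i])
            if fx < fb[i]:                     # update personal best
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]

best, val = pso(lambda p: sum(t * t for t in p), dim=2)
```

With these (illustrative) parameter values, the swarm rapidly concentrates near the minimum of the two-dimensional sphere function.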
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
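The binary position-update rule above can be sketched as follows (helper names are illustrative):

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_i):
    """Sample a new binary position vector: component j is 1 with
    probability sig(v_ij), 0 otherwise, as in the rule above."""
    return [1 if random.random() < sig(vj) else 0 for vj in v_i]
```

Large positive velocity components drive the corresponding bits toward 1, large negative ones toward 0.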
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
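The constriction coefficient is often instantiated in scalar form with &amp;lt;math&amp;gt;\varphi = \varphi_1 + \varphi_2&amp;lt;/math&amp;gt;; a sketch under that common simplification (the function name is illustrative):

```python
import math

def constriction(phi, kappa=1.0):
    """Scalar constriction coefficient chi = 2*kappa / |2 - phi - sqrt(phi^2 - 4*phi)|,
    defined for phi >= 4 (Clerc and Kennedy 2002)."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

For the frequently used value phi = 4.1 with kappa = 1, this yields chi of roughly 0.7298.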
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t}, \sigma_{ij}^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
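The bare-bones sampling rule above can be sketched as follows (the helper name is illustrative):

```python
import random

def barebones_update(b_i, l_i):
    """Bare-bones position update: sample each dimension from a Gaussian whose
    mean is the midpoint of the personal and neighborhood bests and whose
    standard deviation is their distance, per the rule above."""
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

Note that when the personal and neighborhood bests coincide, the standard deviation collapses to zero and the particle jumps exactly onto that shared point.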
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
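The fully informed velocity update can be sketched as follows, here with a uniform weighting function &amp;lt;math&amp;gt;\mathcal{W}&amp;lt;/math&amp;gt; as an illustrative choice (names and parameter defaults are assumptions):

```python
import random

def fips_velocity(v_i, x_i, bests, w=0.7, phi=4.1, weight=lambda b: 1.0):
    """Fully informed velocity update: every neighbor's personal best pulls
    the particle, each pull scaled by a quality weight W (uniform here) and
    a fresh random factor, and the sum is averaged over the neighborhood."""
    n = len(bests)
    new_v = []
    for j, vj in enumerate(v_i):
        pull = sum(weight(b) * random.random() * (b[j] - x_i[j]) for b in bests)
        new_v.append(w * vj + (phi / n) * pull)
    return new_v
```

If the particle already sits on all of its neighbors' best positions, every pull vanishes and only the inertia term remains.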
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kind of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4929</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4929"/>
		<updated>2008-11-07T15:02:08Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* History */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research, in particular Latané's social impact theory (Nowak, Szamrej &amp;amp; Latané, 1990; Kennedy, 2006), was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995).&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'', &lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving the feasibility of solutions and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to update their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
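The pseudocode above can be sketched as a short Python program. The following is an illustrative implementation only, assuming a fully connected topology (so every particle's neighborhood best coincides with the swarm's global best) and using the sphere function as a stand-in objective; the parameter values are common choices from the literature, not prescriptions.

```python
import random

def pso(f, dim, bounds, k=20, w=0.72, phi1=1.49, phi2=1.49,
        iterations=200, seed=0):
    """Minimal synchronous PSO following the pseudocode above.

    Assumes a fully connected topology, so each particle's
    neighborhood best is the swarm's global best. `bounds` gives the
    initialization domain Theta' = [lo, hi]^dim.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: random positions in Theta', zero velocities
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]
    b = [xi[:] for xi in x]           # personal best positions
    fb = [f(xi) for xi in x]          # personal best values

    for _ in range(iterations):
        # Neighborhood (here: global) best, based on personal bests at time t
        g = min(range(k), key=lambda i: fb[i])
        # Velocity and position update loop
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()  # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Solution update loop (synchronous: bests updated after all moves)
        for i in range(k):
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx

    best = min(range(k), key=lambda i: fb[i])
    return b[best], fb[best]

# Example: minimize the sphere function
sphere = lambda x: sum(t * t for t in x)
pos, val = pso(sphere, dim=5, bounds=(-10.0, 10.0))
```

With these convergent parameter settings the swarm quickly contracts around the optimum of the sphere function at the origin.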
&lt;br /&gt;
The algorithm above uses synchronous updates of particle positions and best&lt;br /&gt;
positions: the neighborhood best position is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster propagation of&lt;br /&gt;
the best solutions through the swarm.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
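As a concrete sketch, the binary position-update rule can be written in a few lines of Python; this is an illustrative fragment (the function names are ours), not code from the original paper.

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v, rng=random):
    """Binary PSO position update (Kennedy and Eberhart 1997):
    component j of the position becomes 1 with probability sig(v[j])."""
    return [1 if rng.random() < sig(vj) else 0 for vj in v]

bits = binary_position_update([-10.0, 0.0, 10.0])
```

Large positive velocity components make the corresponding bit almost surely 1, large negative ones almost surely 0, and a zero component leaves the bit uniformly random.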
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
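The computation of one diagonal entry of the constriction matrix can be sketched in Python as follows. The choice φ₁ = φ₂ = 2.05 with the deterministic upper bound u₁ = u₂ = 1 is an illustrative assumption on our part; it yields the often-quoted value χ ≈ 0.7298.

```python
import math

def constriction_entry(phi1, phi2, u1, u2, kappa=1.0):
    """One diagonal entry chi_jj of the constriction matrix
    (Clerc and Kennedy 2002), with phi_jj = phi1*u1 + phi2*u2.
    Requires phi_jj >= 4 so that the square root is real."""
    phi = phi1 * u1 + phi2 * u2
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction_entry(phi1=2.05, phi2=2.05, u1=1.0, u2=1.0)
```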
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t}, \sigma_{ij}^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
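A minimal Python sketch of this sampling step, using `random.gauss` as the normal sampler (the function name is ours, for illustration only):

```python
import random

def barebones_position_update(b_i, l_i, rng=random):
    """Bare-bones PSO position update (Kennedy 2003): each coordinate is
    sampled from a Gaussian centered midway between the personal best b_i
    and the neighborhood best l_i, with standard deviation equal to their
    coordinate-wise distance."""
    return [rng.gauss((b + l) / 2.0, abs(b - l)) for b, l in zip(b_i, l_i)]

x_new = barebones_position_update([0.0, 2.0], [4.0, 2.0])
```

Note that when a coordinate of the personal best and the neighborhood best coincide, the standard deviation is zero and the particle's new coordinate equals that common value.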
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
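The FIPS velocity-update rule can be sketched in Python as follows; using a uniform weighting function (W ≡ 1 for every neighbor best) is one common instantiation and is an assumption here, as are the function names.

```python
import random

def fips_velocity_update(v_i, x_i, neighbor_bests, w=0.72, phi=4.1,
                         weight=lambda b: 1.0, rng=random):
    """Fully informed PSO velocity update (Mendes et al. 2004).

    Every neighbor's personal best contributes to the attraction term;
    `weight` plays the role of the quality-weighting function W (here a
    uniform stub), and a fresh random factor stands in for each U_j entry.
    """
    n = len(x_i)
    m = len(neighbor_bests)
    v_new = []
    for j in range(n):
        social = sum((phi / m) * weight(b) * rng.random() * (b[j] - x_i[j])
                     for b in neighbor_bests)
        v_new.append(w * v_i[j] + social)
    return v_new
```

When all neighbor bests coincide with the particle's current position, the social term vanishes and only the inertia term remains.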
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kind of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4928</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4928"/>
		<updated>2008-11-07T14:23:00Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Bare-bones PSO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'', &lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving the feasibility of solutions and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to update their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates of particle positions and best&lt;br /&gt;
positions: the neighborhood best position is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster propagation of&lt;br /&gt;
the best solutions through the swarm.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
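The binary position update above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the cited papers; the function names `sigmoid` and `update_binary_position` are our own, and the `rng` parameter (defaulting to `random.random`) is introduced here to make the sampling step explicit and testable.

```python
import math
import random

def sigmoid(x):
    # sig(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + math.exp(-x))

def update_binary_position(velocity, rng=random.random):
    # The j-th position component becomes 1 with probability sig(v_j),
    # where r is drawn uniformly from [0, 1) for each component.
    return [1 if rng() < sigmoid(v) else 0 for v in velocity]
```

A strongly positive velocity component thus makes the corresponding bit almost certainly 1, and a strongly negative one makes it almost certainly 0.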
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
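As a minimal sketch, the constriction factor for a given &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa&amp;lt;/math&amp;gt; can be computed as follows. The function name `constriction` is ours; the check for &amp;lt;math&amp;gt;\varphi \ge 4&amp;lt;/math&amp;gt; reflects the condition stated above, which keeps the square root real.

```python
import math

def constriction(phi, kappa=1.0):
    # chi = 2*kappa / |2 - phi - sqrt(phi*(phi - 4))|
    # Convergence requires phi >= 4 and kappa in [0, 1].
    if phi < 4:
        raise ValueError("phi must be >= 4 for a real-valued constriction factor")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

With the commonly used values &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt;, this yields a factor of roughly 0.73, the value often quoted for constricted PSO.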
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t},\sigma_{ij}^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
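The bare-bones sampling rule translates directly into code. The sketch below is illustrative (the function name `bare_bones_position` is ours); each coordinate is drawn from a normal distribution whose mean is the midpoint of the personal and neighborhood bests and whose standard deviation is their distance, as in the equations above.

```python
import random

def bare_bones_position(personal_best, neighborhood_best, rng=random):
    # Sample each coordinate j from N(mu_j, sigma_j), with
    # mu_j = (b_j + l_j) / 2 and sigma_j = |b_j - l_j|.
    return [rng.gauss((b + l) / 2.0, abs(b - l))
            for b, l in zip(personal_best, neighborhood_best)]
```

Note that when a particle's personal best coincides with its neighborhood best in some dimension, the standard deviation is zero and the particle stops moving in that dimension.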
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
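A component-wise sketch of the FIPS velocity update is given below, assuming the weights &amp;lt;math&amp;gt;\mathcal{W}(\vec{b}^{\,t}_j)&amp;lt;/math&amp;gt; have already been computed and are passed in as a list. The function name `fips_velocity` and the parameter defaults are ours, chosen only for illustration.

```python
import random

def fips_velocity(v, x, neighbor_bests, weights, w=0.7, phi=4.1,
                  rng=random.random):
    # v' = w*v + (phi/|N|) * sum_j W(b_j) * U_j * (b_j - x), per dimension,
    # where each U_j entry is drawn uniformly from [0, 1).
    n = len(neighbor_bests)
    new_v = []
    for d in range(len(x)):
        social = sum(weights[j] * rng() * (neighbor_bests[j][d] - x[d])
                     for j in range(n))
        new_v.append(w * v[d] + (phi / n) * social)
    return new_v
```

In contrast to the standard update, every neighbor's personal best contributes to the movement, with the weights modulating each contribution by solution quality.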
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4927</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4927"/>
		<updated>2008-11-07T14:19:17Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', resembles the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and a proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive constraint-handling mechanisms is to prevent particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; from updating their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in faster propagation of&lt;br /&gt;
the best solutions through the swarm.&lt;br /&gt;
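The pseudocode above, specialized to a fully connected topology and synchronous updates, can be sketched as a short Python function. This is an illustrative sketch, not a reference implementation: the name `pso_minimize`, the default parameter values, and the fixed random seed are all our own choices.

```python
import random

def pso_minimize(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5,
                 iterations=200, seed=42):
    """Minimal synchronous PSO with a fully connected topology (sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: random positions in the initialization region, zero velocities.
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    vs = [[0.0] * dim for _ in range(k)]
    bests = [x[:] for x in xs]              # personal best positions b_i
    best_vals = [f(x) for x in xs]
    g = min(range(k), key=lambda i: best_vals[i])
    gbest, gval = bests[g][:], best_vals[g]  # neighborhood best l_i (shared by all)
    for _ in range(iterations):
        # Velocity and position update loop.
        for i in range(k):
            for d in range(dim):
                u1, u2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + phi1 * u1 * (bests[i][d] - xs[i][d])
                            + phi2 * u2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
        # Solution update loop (synchronous: after all positions are updated).
        for i in range(k):
            val = f(xs[i])
            if val < best_vals[i]:
                bests[i], best_vals[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

For example, minimizing the sphere function `f(x) = sum(c*c for c in x)` in two dimensions over `(-5, 5)` drives the best objective value close to zero within a few hundred iterations.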
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;, and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu_{ij}^{t},\sigma_{ij}^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu_{ij}^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma_{ij}^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4926</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4926"/>
		<updated>2008-11-07T14:15:24Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the optimization problem under&lt;br /&gt;
consideration, defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
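To make the notion of a population topology concrete, the following sketch (a hypothetical helper, not part of the article) builds the neighborhood index sets of a ring topology, in which each particle has the particles immediately before and after it as neighbors; including a particle in its own neighborhood is a common convention.

```python
# Sketch of a ring population topology: particle i is neighbor to the
# particles immediately before and after it (plus itself, by convention).
# Illustrative code only.
def ring_neighborhoods(k):
    """Return, for each of k particles, the indices of its neighbors."""
    return {i: [(i - 1) % k, i, (i + 1) % k] for i in range(k)}
```

For example, `ring_neighborhoods(5)[0]` is `[4, 0, 1]`: the ring wraps around at the ends.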
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
In some cases, particles can be attracted to regions outside the feasible search space &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;. For this reason, mechanisms for preserving solution feasibility and proper swarm operation have been devised (Engelbrecht 2005). One of the least disruptive mechanisms for handling constraints is one in which particles that leave &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; are not allowed to improve their personal best position, so that they are attracted back to the feasible space in subsequent iterations.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates: personal best and neighborhood&lt;br /&gt;
best positions are updated only after all particle positions have been&lt;br /&gt;
updated. In asynchronous update mode, the best positions are updated&lt;br /&gt;
immediately after each particle's position update, which propagates the best&lt;br /&gt;
solutions through the swarm more quickly.&lt;br /&gt;
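Putting the update rules and the pseudocode above together, a minimal Python sketch of the standard algorithm might look as follows. This is an illustration, not a reference implementation: the sphere objective, the fully connected topology, the search bounds, and the parameter values are all arbitrary choices made for the example.

```python
# Minimal sketch of standard PSO on the sphere function, with a fully
# connected topology and synchronous updates. All settings are illustrative.
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(f, n, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=1):
    rng = random.Random(seed)
    x = [[rng.uniform(-5.0, 5.0) for _ in range(n)] for _ in range(k)]  # positions
    v = [[0.0] * n for _ in range(k)]                                   # velocities
    b = [xi[:] for xi in x]                                             # personal bests
    fb = [f(xi) for xi in x]                                            # personal-best values
    for _ in range(iters):
        # Fully connected topology: the neighborhood best is the swarm best.
        g = min(range(k), key=lambda i: fb[i])
        for i in range(k):
            for j in range(n):
                v[i][j] = (w * v[i][j]
                           + phi1 * rng.random() * (b[i][j] - x[i][j])
                           + phi2 * rng.random() * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Synchronous personal-best update, after all positions have moved.
        for i in range(k):
            fx = f(x[i])
            if not fx > fb[i]:
                b[i], fb[i] = x[i][:], fx
    return min(fb)
```

With these settings the swarm typically drives the sphere function close to its optimum at the origin within the iteration budget.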
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
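The binary position-update rule above can be sketched in Python as follows (illustrative names only): the sigmoid maps each velocity component to the probability of the corresponding bit taking the value 1.

```python
# Sketch of the binary PSO position update: each bit becomes 1 with
# probability sig(v_ij). Illustrative code only.
import math
import random

def sig(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_position(velocity, rng):
    # A bit is set when the sigmoid of its velocity exceeds a uniform draw.
    return [1 if sig(v) > rng.random() else 0 for v in velocity]
```

Large positive velocity components thus make a bit almost certainly 1, large negative ones almost certainly 0, and a zero component leaves the bit at a 50/50 chance.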
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
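As a numerical illustration of the constriction coefficient (for the scalar case; the helper name is hypothetical):

```python
# Sketch: constriction coefficient chi = 2k / |2 - phi - sqrt(phi(phi - 4))|
# for a scalar phi (Clerc and Kennedy 2002). Requires phi at least 4 so the
# square root is real. Illustrative code only.
import math

def chi(phi, kappa=1.0):
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

For the commonly used value phi = 4.1 with kappa = 1, this yields chi of roughly 0.7298, the constriction factor often quoted in the PSO literature.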
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij},\sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
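A minimal sketch of this sampling rule for one particle (illustrative names, using Python's standard Gaussian generator):

```python
# Sketch of the bare-bones position update: each coordinate is drawn from a
# Gaussian centred midway between the personal best b_i and the neighborhood
# best l_i, with spread equal to their distance. Illustrative code only.
import random

def barebones_update(b_i, l_i, rng):
    new_x = []
    for bj, lj in zip(b_i, l_i):
        mu = (bj + lj) / 2.0     # midpoint of the two attractors
        sigma = abs(bj - lj)     # spread shrinks as the attractors agree
        new_x.append(rng.gauss(mu, sigma))
    return new_x
```

Note that when a coordinate of the personal and neighborhood bests coincide, the spread is zero and that coordinate stops moving, which is one way the swarm converges.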
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
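The FIPS velocity update can be sketched as follows, under the simplifying assumption of a uniform weighting function (every neighbor contributes with weight 1; Mendes et al. also consider quality-based weights). Names are illustrative.

```python
# Sketch of the FIPS velocity update with uniform weights (W = 1 for all
# neighbors), a simplifying assumption. Illustrative code only.
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1, rng=random):
    n = len(x_i)
    new_v = [w * vj for vj in v_i]          # inertia term
    for b in neighbor_bests:
        for j in range(n):
            u = rng.random()                # fresh factor per neighbor and dimension
            new_v[j] += (phi / len(neighbor_bests)) * u * (b[j] - x_i[j])
    return new_v
```

In contrast to the standard update, every neighbor's personal best pulls on the particle, with the acceleration coefficient split evenly across the neighborhood.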
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4925</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4925"/>
		<updated>2008-11-07T13:50:54Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the optimization problem under&lt;br /&gt;
consideration, defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
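The three topologies described above can be written down concretely as neighborhood index sets. A minimal Python sketch (the function names and the torus-grid layout used for the von Neumann case are illustrative choices, not part of the original formulation):

```python
def fully_connected(k):
    # every particle is a neighbor of every particle (self included), N_i = P
    return [set(range(k)) for _ in range(k)]

def ring(k):
    # each particle is neighbor to the two adjacent particles on a ring
    return [{(i - 1) % k, (i + 1) % k} for i in range(k)]

def von_neumann(k, width):
    # particles arranged on a torus grid; neighbors are up/down/left/right,
    # so |N_i| = 4 for every particle
    height = k // width
    nbrs = []
    for i in range(k):
        r, c = divmod(i, width)
        nbrs.append({
            ((r - 1) % height) * width + c,
            ((r + 1) % height) * width + c,
            r * width + (c - 1) % width,
            r * width + (c + 1) % width,
        })
    return nbrs
```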
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
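Because the random matrices are diagonal, the two stochastic terms act componentwise, with an independent uniform draw per dimension. A minimal sketch of the two update rules for a single particle (the parameter defaults are illustrative):

```python
import random

def velocity_update(v, x, b, l, w=0.729, phi1=2.05, phi2=2.05):
    # the diagonal matrices U1, U2 reduce to one uniform [0,1) draw per
    # component of the cognitive and social terms
    return [w * vj
            + phi1 * random.random() * (bj - xj)
            + phi2 * random.random() * (lj - xj)
            for vj, xj, bj, lj in zip(v, x, b, l)]

def position_update(x, v_new):
    return [xj + vj for xj, vj in zip(x, v_new)]
```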
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
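The pseudocode translates almost line for line into a runnable sketch. The version below assumes a fully connected topology, zero initial velocities, and a fixed iteration budget as the stopping criterion; these choices, and the parameter defaults, are illustrative rather than prescribed:

```python
import random

def pso(f, bounds, k=20, w=0.729, phi1=2.05, phi2=2.05, iters=300, seed=1):
    # bounds: list of (low, high) pairs defining the initialization region
    rng = random.Random(seed)
    n = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(k)]
    v = [[0.0] * n for _ in range(k)]            # velocities start at zero
    b = [xi[:] for xi in x]                      # personal bests
    fb = [f(xi) for xi in x]                     # their objective values
    for _ in range(iters):
        # fully connected topology: the neighborhood best is the swarm best
        l = b[min(range(k), key=lambda i: fb[i])]
        for i in range(k):                       # velocity/position update loop
            for j in range(n):
                v[i][j] = (w * v[i][j]
                           + phi1 * rng.random() * (b[i][j] - x[i][j])
                           + phi2 * rng.random() * (l[j] - x[i][j]))
                x[i][j] += v[i][j]
        for i in range(k):                       # solution update loop
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]
```

For example, minimizing the two-dimensional sphere function with this sketch drives the best objective value close to zero within a few hundred iterations.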
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in faster propagation of the&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
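The asynchronous mode can be sketched as a single sweep in which the neighborhood best is re-read before every particle moves, so improvements are visible within the same sweep. The `move` helper below is hypothetical; any velocity/position rule can be plugged in:

```python
def asynchronous_sweep(x, b, fb, f, move):
    # One asynchronous sweep over the swarm: the neighborhood best is refreshed
    # and each personal best is updated immediately after each particle moves,
    # so later particles in the same sweep already see the improvement.
    k = len(x)
    for i in range(k):
        l = b[min(range(k), key=lambda j: fb[j])]   # refreshed per particle
        move(i, l)                                  # apply any update rule
        fx = f(x[i])
        if fx < fb[i]:
            b[i], fb[i] = x[i][:], fx
```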
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
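The binary position-update rule above can be sketched as follows (function names are illustrative; &amp;lt;math&amp;gt;sig&amp;lt;/math&amp;gt; is the logistic function from the text):

```python
import math
import random

def sig(x):
    # logistic function mapping a velocity component to a probability
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_new, rng=random):
    # component j becomes 1 with probability sig(v_j), and 0 otherwise
    return [1 if rng.random() < sig(vj) else 0 for vj in v_new]
```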
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
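One diagonal entry of the constriction matrix can be computed as below; the sketch assumes the canonical Clerc-Kennedy form with &amp;lt;math&amp;gt;\varphi(\varphi-4)&amp;lt;/math&amp;gt; under the radical and &amp;lt;math&amp;gt;\varphi \geq 4&amp;lt;/math&amp;gt;:

```python
import math

def chi(phi, kappa=1.0):
    # Constriction coefficient for one diagonal entry; phi stands for
    # phi1*U1_jj + phi2*U2_jj and is assumed to satisfy phi >= 4.
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

With the commonly used setting &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt;, this yields a coefficient of about 0.7298.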
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
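The bare-bones sampling step for one coordinate can be sketched as:

```python
import random

def barebones_component(b_ij, l_ij, rng=random):
    # sample the new coordinate from N(mu, sigma), where mu is the midpoint of
    # the personal and neighborhood bests and sigma is their absolute distance
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)
```

Note that when the personal and neighborhood bests coincide, the standard deviation collapses to zero and the particle stays put in that dimension.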
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
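The FIPS velocity-update rule can be sketched as follows, with scalar uniform draws standing in for the diagonal entries of the &amp;lt;math&amp;gt;\vec{U}^{\,t}_j&amp;lt;/math&amp;gt; matrices; setting all weights to 1 recovers the unweighted rule (the function name and signature are illustrative):

```python
import random

def fips_velocity(v, x, neighbor_bests, weights, w=0.729, phi=4.1, rng=random):
    # neighbor_bests: personal-best vectors of the particle's neighbors;
    # weights: the W(b_j) values in [0, 1] weighing each neighbor's pull
    n, m = len(x), len(neighbor_bests)
    return [w * v[j] + (phi / m) * sum(weights[a] * rng.random()
                                       * (neighbor_bests[a][j] - x[j])
                                       for a in range(m))
            for j in range(n)]
```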
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kind of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4924</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4924"/>
		<updated>2008-11-07T13:49:02Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the optimization problem under consideration,&lt;br /&gt;
defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster propagation of the&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
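&lt;br /&gt;
As an illustration, the synchronous algorithm above can be sketched in Python. This is a minimal sketch, assuming a fully connected topology, the sphere function as a stand-in objective, a fixed iteration budget as the stopping criterion, and arbitrary (but convergent) parameter values; none of these choices are prescribed by the pseudocode.&lt;br /&gt;
&lt;br /&gt;
```python
# Minimal synchronous PSO sketch (illustrative only; topology, objective,
# and parameter values here are assumptions, not part of the pseudocode).
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(f=sphere, n=2, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=1):
    rng = random.Random(seed)
    # Initialization: positions random in [-5, 5]^n, velocities zero, b_i = x_i
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(n)] for _ in range(k)]
    vs = [[0.0] * n for _ in range(k)]
    bs = [list(x) for x in xs]
    for _ in range(iters):
        # Fully connected topology: the neighborhood best is the swarm best.
        # It is computed once per iteration, i.e., updates are synchronous.
        l = min(bs, key=f)
        for i in range(k):  # velocity and position update loop
            for j in range(n):
                u1, u2 = rng.random(), rng.random()  # diagonal entries of U1, U2
                vs[i][j] = (w * vs[i][j]
                            + phi1 * u1 * (bs[i][j] - xs[i][j])
                            + phi2 * u2 * (l[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
        for i in range(k):  # personal-best update loop
            if f(xs[i]) < f(bs[i]):
                bs[i] = list(xs[i])
    return min(bs, key=f)

best = pso()
```

With these convergent parameter settings the swarm quickly contracts around the optimum of the sphere function; an asynchronous variant would recompute the neighborhood best inside the first inner loop.&lt;br /&gt;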
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
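&lt;br /&gt;
The binary position-update rule above can be sketched as follows; the velocity values passed in are arbitrary illustrative numbers.&lt;br /&gt;
&lt;br /&gt;
```python
# Binary PSO position update (Kennedy and Eberhart 1997): the sigmoid of a
# velocity component gives the probability that the corresponding bit is 1.
import math
import random

def sig(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_position_bits(v, rng):
    # One Bernoulli draw per dimension, with success probability sig(v_j)
    return [1 if rng.random() < sig(vj) else 0 for vj in v]

bits = update_position_bits([-10.0, 0.0, 10.0], random.Random(0))
```

A strongly negative velocity component makes the bit almost surely 0, a strongly positive one makes it almost surely 1, and a zero velocity leaves the bit equally likely to be 0 or 1.&lt;br /&gt;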
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
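&lt;br /&gt;
A short numeric sketch of the constriction coefficient, in its commonly stated form; the value of &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; used below is illustrative.&lt;br /&gt;
&lt;br /&gt;
```python
# Per-dimension constriction coefficient chi (Clerc and Kennedy 2002).
import math

def constriction(phi, kappa=1.0):
    # Assumes phi >= 4 so that the square root is real.
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(4.1)  # a frequently used setting; chi is roughly 0.73
```

At the boundary &amp;lt;math&amp;gt;\varphi = 4&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt; the coefficient equals 1, and it decreases as &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; grows, damping the velocities.&lt;br /&gt;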
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
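&lt;br /&gt;
The sampling rule above can be sketched as a one-line update per coordinate; the vectors used below are illustrative.&lt;br /&gt;
&lt;br /&gt;
```python
# Bare-bones sampling (Kennedy 2003): each coordinate is drawn from a normal
# distribution whose mean is the midpoint of the personal best b and the
# neighborhood best l, and whose standard deviation is |b - l|.
import random

def barebones_step(b, l, rng):
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj)) for bj, lj in zip(b, l)]

x_new = barebones_step([0.0, 2.0], [1.0, 2.0], random.Random(42))
```

Note that in a dimension where the personal best and the neighborhood best coincide, the standard deviation is zero and the particle stops exploring that dimension.&lt;br /&gt;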
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
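&lt;br /&gt;
The FIPS velocity update can be sketched for a single particle as follows, assuming the simplest weighting &amp;lt;math&amp;gt;\mathcal{W}(\vec{b}_j) = 1&amp;lt;/math&amp;gt; for all neighbors (a common special case); the vectors and parameter values are illustrative.&lt;br /&gt;
&lt;br /&gt;
```python
# FIPS velocity update (Mendes et al. 2004) for one particle, with equal
# weights for all neighbors' personal bests.
import random

def fips_velocity(v, x, neighbor_bests, w=0.7, phi=4.1, rng=None):
    rng = rng or random.Random(0)
    k = len(neighbor_bests)
    new_v = []
    for j in range(len(x)):
        # Average the stochastic attractions toward every neighbor's best
        s = sum(rng.random() * (b[j] - x[j]) for b in neighbor_bests)
        new_v.append(w * v[j] + (phi / k) * s)
    return new_v

v_next = fips_velocity([0.0], [0.0], [[1.0], [-1.0]])
```

With two neighbors pulling in opposite directions, the averaged attraction partially cancels, illustrating how FIPS blends information from the whole neighborhood instead of following a single best neighbor.&lt;br /&gt;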
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** [http://www.springer.com/11721 Swarm Intelligence] (the main journal reporting on swarm intelligence research) regularly publishes articles on PSO. Other journals also publish articles about PSO. These include the IEEE Transactions series, [http://www.elsevier.com/locate/asoc/ Applied Soft Computing], [http://www.springer.com/computer/foundations/journal/11047 Natural Computing], [http://www.springer.com/engineering/journal/158 Structural and Multidisciplinary Optimization], and others.&lt;br /&gt;
&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4923</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4923"/>
		<updated>2008-11-07T13:29:13Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:PSOTopologies-9.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
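&lt;br /&gt;
The two update rules can be written out for a single particle, with the diagonal random matrices &amp;lt;math&amp;gt;\vec{U}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}_2&amp;lt;/math&amp;gt; represented by vectors of their diagonal entries. This is a sketch; the parameter values and input vectors are illustrative.&lt;br /&gt;
&lt;br /&gt;
```python
# One velocity/position update step for a single particle.
import random

def update_particle(x, v, b, l, w=0.7, phi1=1.5, phi2=1.5, rng=None):
    rng = rng or random.Random(3)
    u1 = [rng.random() for _ in x]  # fresh diagonal of U1 at this iteration
    u2 = [rng.random() for _ in x]  # fresh diagonal of U2
    v_next = [w * vj + phi1 * a * (bj - xj) + phi2 * c * (lj - xj)
              for xj, vj, bj, lj, a, c in zip(x, v, b, l, u1, u2)]
    x_next = [xj + vj for xj, vj in zip(x, v_next)]
    return x_next, v_next

x1, v1 = update_particle(x=[0.0], v=[0.0], b=[1.0], l=[1.0])
```

Since the personal best and the neighborhood best both lie at +1 while the particle starts at the origin with zero velocity, the new velocity is a positive random pull toward them, and the position moves by exactly that amount.&lt;br /&gt;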
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', resembles the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster propagation of the&lt;br /&gt;
best solutions through the swarm.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. In ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=File:PSOTopologies-9.png&amp;diff=4922</id>
		<title>File:PSOTopologies-9.png</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=File:PSOTopologies-9.png&amp;diff=4922"/>
		<updated>2008-11-07T13:24:55Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: Depiction of some commonly used population topologies&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Depiction of some commonly used population topologies&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4921</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4921"/>
		<updated>2008-11-07T13:14:35Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology'' (Figure 1).&lt;br /&gt;
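&lt;br /&gt;
For example, a ring topology over &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; particles can be represented as a list of neighbor index lists (a minimal Python sketch; the function name is ours, and each particle is included in its own neighborhood, as is common):&lt;br /&gt;
&lt;br /&gt;
```python
def ring_topology(k):
    """Neighborhood indices for a ring: each particle p_i is a neighbor
    of p_{i-1} and p_{i+1} (indices wrap around), plus itself."""
    return [[(i - 1) % k, i, (i + 1) % k] for i in range(k)]
```
&lt;br /&gt;
With &amp;lt;math&amp;gt;k = 5&amp;lt;/math&amp;gt;, particle 0's neighborhood is [4, 0, 1].&lt;br /&gt;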
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or a small random value&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates result in a faster&lt;br /&gt;
propagation of best solutions through the swarm.&lt;br /&gt;
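&lt;br /&gt;
The update rules and the pseudocode above can be sketched as a compact Python implementation (a sketch under our own assumptions: fully connected topology, synchronous updates, an initialization region of &amp;lt;math&amp;gt;[-5,5]^n&amp;lt;/math&amp;gt;, and illustrative parameter values):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def pso(f, dim, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=1):
    """Minimal synchronous PSO minimizing f over [-5, 5]^dim with a
    fully connected topology (so the neighborhood best is the swarm best)."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]          # velocities start at zero
    b = [p[:] for p in x]                        # personal best positions
    fb = [f(p) for p in b]                       # personal best values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # swarm best index
        # Velocity and position update loop
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Synchronous solution update loop
        for i in range(k):
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx
    return min(fb)
```
&lt;br /&gt;
On a smooth unimodal function such as the sphere function, this sketch typically drives the best objective value close to zero within a few hundred iterations.&lt;br /&gt;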
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
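&lt;br /&gt;
The binary position update can be sketched as follows (the helper names are ours):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random

def sig(x):
    """Sigmoid mapping a velocity component to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_ij, rng=random):
    """Set the j-th position component to 1 with probability sig(v_ij),
    and to 0 otherwise."""
    return 1 if rng.random() < sig(v_ij) else 0
```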
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)] \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
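&lt;br /&gt;
For a scalar combined acceleration &amp;lt;math&amp;gt;\varphi \ge 4&amp;lt;/math&amp;gt;, the coefficient can be computed as follows (a sketch of the scalar Clerc-Kennedy form; the function name is ours, and &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt; is a widely used setting, giving &amp;lt;math&amp;gt;\chi \approx 0.7298&amp;lt;/math&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def constriction(phi, kappa=1.0):
    """Scalar Clerc-Kennedy constriction coefficient,
    valid for phi >= 4 and kappa in [0, 1]."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```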
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}, \sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
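&lt;br /&gt;
A single-component bare-bones update can be sketched as follows (the function name is ours):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def barebones_update(b_ij, l_ij, rng=random):
    """Sample a new position component from N(mu, sigma), where mu is the
    midpoint of the personal and neighborhood bests and sigma is their
    absolute distance."""
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)
```
&lt;br /&gt;
Note that when the personal and neighborhood bests coincide, the distribution collapses onto that point, which is one reason this variant can converge prematurely on some problems.&lt;br /&gt;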
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
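&lt;br /&gt;
As an illustration, the FIPS velocity update for a single particle can be sketched in Python (a sketch under our own assumptions: the function name is ours, the defaults &amp;lt;math&amp;gt;w = 0.729&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; are common choices from the literature rather than values fixed here, and the quality weight &amp;lt;math&amp;gt;\mathcal{W}&amp;lt;/math&amp;gt; defaults to uniform):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def fips_velocity(v_i, x_i, bests, w=0.729, phi=4.1, weight=None, rng=random):
    """FIPS velocity update for one particle: every neighbor's personal
    best contributes a randomly weighted attraction, scaled by phi/|N_i|
    and an optional quality weight W (uniform, i.e. W = 1, by default)."""
    n = len(bests)
    weight = weight or (lambda b: 1.0)
    new_v = []
    for j in range(len(v_i)):
        acc = sum(weight(b) * rng.random() * (b[j] - x_i[j]) for b in bests)
        new_v.append(w * v_i[j] + (phi / n) * acc)
    return new_v
```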
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;math&amp;gt;^1&amp;lt;/math&amp;gt;Without loss of generality, the presentation considers only minimization problems.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. In ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems--A technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Swarm Intelligence]], [[Ant Colony Optimization]], [[Optimization]], [[Stochastic Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4919</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4919"/>
		<updated>2008-10-22T16:58:20Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Bare bones PSO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
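For illustration, one way to build the neighbor sets &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; for the ring topology shown in the figure is sketched below (the helper name is ours; whether a particle belongs to its own neighborhood is a design choice, here it does):&lt;br /&gt;

```python
def ring_neighborhoods(k):
    """Neighbor index sets for k particles on a ring: each particle is a
    neighbor of the two adjacent particles and, by convention, of itself."""
    return [{(i - 1) % k, i, (i + 1) % k} for i in range(k)]
```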
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
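Since &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are diagonal, each velocity component is scaled by its own independent uniform random number. A minimal sketch of the two update rules for one particle (the function name and the default parameter values, which are common settings from the literature, are ours):&lt;br /&gt;

```python
import random

def update_particle(x, v, b, l, w=0.7298, phi1=1.49618, phi2=1.49618,
                    rng=random):
    """One PSO update for a single particle.
    x, v, b, l: position, velocity, personal best, neighborhood best
    (equal-length sequences). Returns (new_x, new_v)."""
    new_v = [w * vj
             + phi1 * rng.random() * (bj - xj)   # cognitive component
             + phi2 * rng.random() * (lj - xj)   # social component
             for xj, vj, bj, lj in zip(x, v, b, l)]
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    return new_x, new_v
```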
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates: the neighborhood best&lt;br /&gt;
positions are updated only after all particle positions and personal best&lt;br /&gt;
positions have been updated. In asynchronous update mode, the best position&lt;br /&gt;
found is updated immediately after each particle's position update, which&lt;br /&gt;
propagates good solutions through the swarm more quickly.&lt;br /&gt;
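The pseudocode above can be turned into a minimal runnable program. The sketch below assumes a fully connected topology (each particle's neighborhood best is the swarm best), synchronous updates, and commonly used parameter values; all names are ours and the sphere function serves only as a test problem:&lt;br /&gt;

```python
import random

def pso_minimize(f, dim, bounds, k=20, iters=200,
                 w=0.7298, phi1=1.49618, phi2=1.49618, seed=0):
    """Minimal synchronous PSO with a fully connected topology."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    vs = [[0.0] * dim for _ in range(k)]   # velocities initialized to zero
    bs = [x[:] for x in xs]                # personal best positions
    bf = [f(x) for x in xs]                # personal best values
    for _ in range(iters):
        l = bs[min(range(k), key=lambda i: bf[i])]  # swarm (neighborhood) best
        for i in range(k):                           # velocity and position update loop
            for j in range(dim):
                vs[i][j] = (w * vs[i][j]
                            + phi1 * rng.random() * (bs[i][j] - xs[i][j])
                            + phi2 * rng.random() * (l[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
        for i in range(k):                           # solution update loop
            fx = f(xs[i])
            if fx < bf[i]:
                bf[i], bs[i] = fx, xs[i][:]
    g = min(range(k), key=lambda i: bf[i])
    return bs[g], bf[g]

# Usage: minimize the 5-dimensional sphere function.
best, value = pso_minimize(lambda x: sum(t * t for t in x),
                           dim=5, bounds=(-10.0, 10.0))
```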
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
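A sketch of this position-update rule (helper names are ours):&lt;br /&gt;

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v, rng=random):
    """Binary PSO: each position component becomes 1 with probability
    sig(v_j), and 0 otherwise."""
    return [1 if rng.random() < sig(vj) else 0 for vj in v]
```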
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{(\varphi^t_{jj})^2-4\varphi^t_{jj}}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
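Per component, the coefficient can be computed as follows, using the constriction formula of Clerc and Kennedy (2002), &amp;lt;math&amp;gt;\chi = 2\kappa/|2-\varphi-\sqrt{\varphi^2-4\varphi}|&amp;lt;/math&amp;gt; (a sketch; the function name is ours). With &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt; it yields the frequently quoted value &amp;lt;math&amp;gt;\chi \approx 0.7298&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import math

def constriction(phi, kappa=1.0):
    """chi = 2*kappa / |2 - phi - sqrt(phi^2 - 4*phi)|, valid for phi >= 4."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```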
&lt;br /&gt;
=== Bare-bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
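A sketch of sampling one coordinate in the bare-bones update (the function name is ours):&lt;br /&gt;

```python
import random

def barebones_component(b_ij, l_ij, rng=random):
    """Sample a new coordinate from N(mu, sigma), where mu is the midpoint
    of the personal and neighborhood bests and sigma is their distance."""
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)
```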
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
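A sketch of this velocity update with uniform weights, i.e. &amp;lt;math&amp;gt;\mathcal{W}(\vec{b}^{\,t}_j) = 1&amp;lt;/math&amp;gt; for every neighbor (a simplifying assumption; names and parameter defaults are ours):&lt;br /&gt;

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7298, phi=4.1, rng=random):
    """Fully informed velocity update: every neighbor's personal best
    contributes. neighbor_bests holds one position per neighbor in N_i."""
    n = len(neighbor_bests)
    return [w * v_i[j]
            + (phi / n) * sum(rng.random() * (b[j] - x_i[j])
                              for b in neighbor_bests)
            for j in range(len(x_i))]
```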
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4918</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4918"/>
		<updated>2008-10-22T16:55:29Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity) . The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_2\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterizes the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'' serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'' resembles the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'' &lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;, the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
         if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
             Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
         else&lt;br /&gt;
             Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
         end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates lead to a faster propagation&lt;br /&gt;
of the best solutions through the swarm.&lt;br /&gt;
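The loop structure above can be made concrete in Python. The following is a minimal sketch, not a reference implementation: it assumes a fully connected topology (so the neighborhood best coincides with the swarm's global best), the sphere function as objective, and illustrative parameter values; all names are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
import random

def sphere(x):
    # Objective function f(x) = sum of squares; minimum 0 at the origin.
    return sum(v * v for v in x)

def pso(f, n=2, k=10, iters=200, w=0.7, phi1=1.5, phi2=1.5, seed=1):
    # Minimal synchronous PSO with a fully connected topology, so the
    # neighborhood best l_i is simply the swarm's global best position.
    rng = random.Random(seed)
    x = [[rng.uniform(-5.0, 5.0) for _ in range(n)] for _ in range(k)]  # Theta'
    v = [[0.0] * n for _ in range(k)]          # velocities initialized to zero
    b = [xi[:] for xi in x]                    # personal best positions
    g = min(b, key=f)[:]                       # neighborhood (global) best
    for _ in range(iters):
        for i in range(k):                     # velocity and position updates
            for j in range(n):
                u1, u2 = rng.random(), rng.random()  # diag. entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
        for i in range(k):                     # synchronous best-position update
            if f(x[i]) < f(b[i]):
                b[i] = x[i][:]
        g = min(b, key=f)[:]
    return g

best = pso(sphere)
```
&lt;br /&gt;
Note that the synchronous order of the pseudocode is preserved: every particle moves before any personal best is updated.&lt;br /&gt;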
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
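The binary position-update rule can be sketched in Python as follows (function names are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random

def sig(v):
    # Logistic function mapping a velocity component to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def update_binary_position(velocity, rng):
    # Component j of the position becomes 1 with probability sig(v_j), else 0.
    return [1 if rng.random() < sig(vj) else 0 for vj in velocity]

rng = random.Random(0)
bits = update_binary_position([-10.0, 0.0, 10.0], rng)
```
&lt;br /&gt;
Large negative velocity components make the corresponding bit almost surely 0, and large positive components make it almost surely 1.&lt;br /&gt;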
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
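As a numerical check, the constriction coefficient can be computed directly (a sketch; the function name is hypothetical). With &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; this yields the widely used value &amp;lt;math&amp;gt;\chi \approx 0.7298&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
```python
import math

def constriction(phi, kappa=1.0):
    # Per-component constriction coefficient chi (Clerc and Kennedy 2002),
    # defined here for phi >= 4 and kappa in [0, 1].
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(4.1)  # the commonly used setting, chi ~ 0.7298
```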
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
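The sampling step of the bare-bones update can be sketched in one dimension as follows (the function name is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def bare_bones_step(b_ij, l_ij, rng):
    # Sample the new j-th coordinate from N(mu, sigma), where mu is the
    # midpoint of the personal and neighborhood bests and sigma their distance.
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)

rng = random.Random(3)
sample = bare_bones_step(1.0, 3.0, rng)
```
&lt;br /&gt;
When the personal and neighborhood bests coincide, the standard deviation collapses to zero and the particle stays at that point.&lt;br /&gt;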
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
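The FIPS velocity update can be sketched in Python for the common special case &amp;lt;math&amp;gt;\mathcal{W}(\vec{b}^{\,t}_j) = 1&amp;lt;/math&amp;gt; (equal weights for all neighbors); the parameter values and function name are illustrative.&lt;br /&gt;
&lt;br /&gt;
```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7298, phi=4.1, rng=None):
    # Fully informed velocity update with equal weights, i.e. W(b_j) = 1
    # for every neighbor; each neighbor contributes its personal best.
    rng = rng or random.Random()
    k = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        social = sum(rng.random() * (b[j] - x_i[j]) for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / k) * social)
    return new_v

v_new = fips_velocity([1.0], [0.0], [[0.0], [0.0]], rng=random.Random(1))
```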
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4917</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4917"/>
		<updated>2008-10-22T16:54:15Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology, in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
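For illustration, the ring topology mentioned above can be constructed as follows (a minimal sketch; the function name and the convention that a particle belongs to its own neighborhood are assumptions):&lt;br /&gt;
&lt;br /&gt;
```python
def ring_neighborhoods(k):
    # Ring topology: particle i is a neighbor of itself and of the two
    # adjacent particles, with indices taken modulo k to close the ring.
    return [{(i - 1) % k, i, (i + 1) % k} for i in range(k)]

neighborhoods = ring_neighborhoods(5)
```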
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social component'',&lt;br /&gt;
quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
         if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
             Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
         else&lt;br /&gt;
             Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
         end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates lead to a faster propagation&lt;br /&gt;
of the best solutions through the swarm.&lt;br /&gt;
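The loop structure above can be made concrete in Python. The following is a minimal sketch, not a reference implementation: it assumes a fully connected topology (so the neighborhood best coincides with the swarm's global best), the sphere function as objective, and illustrative parameter values; all names are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
import random

def sphere(x):
    # Objective function f(x) = sum of squares; minimum 0 at the origin.
    return sum(v * v for v in x)

def pso(f, n=2, k=10, iters=200, w=0.7, phi1=1.5, phi2=1.5, seed=1):
    # Minimal synchronous PSO with a fully connected topology, so the
    # neighborhood best l_i is simply the swarm's global best position.
    rng = random.Random(seed)
    x = [[rng.uniform(-5.0, 5.0) for _ in range(n)] for _ in range(k)]  # Theta'
    v = [[0.0] * n for _ in range(k)]          # velocities initialized to zero
    b = [xi[:] for xi in x]                    # personal best positions
    g = min(b, key=f)[:]                       # neighborhood (global) best
    for _ in range(iters):
        for i in range(k):                     # velocity and position updates
            for j in range(n):
                u1, u2 = rng.random(), rng.random()  # diag. entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
        for i in range(k):                     # synchronous best-position update
            if f(x[i]) < f(b[i]):
                b[i] = x[i][:]
        g = min(b, key=f)[:]
    return g

best = pso(sphere)
```
&lt;br /&gt;
Note that the synchronous order of the pseudocode is preserved: every particle moves before any personal best is updated.&lt;br /&gt;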
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
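The binary position-update rule can be sketched in Python as follows (function names are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random

def sig(v):
    # Logistic function mapping a velocity component to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def update_binary_position(velocity, rng):
    # Component j of the position becomes 1 with probability sig(v_j), else 0.
    return [1 if rng.random() < sig(vj) else 0 for vj in velocity]

rng = random.Random(0)
bits = update_binary_position([-10.0, 0.0, 10.0], rng)
```
&lt;br /&gt;
Large negative velocity components make the corresponding bit almost surely 0, and large positive components make it almost surely 1.&lt;br /&gt;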
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
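As a numerical check, the constriction coefficient can be computed directly (a sketch; the function name is hypothetical). With &amp;lt;math&amp;gt;\kappa = 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi = 4.1&amp;lt;/math&amp;gt; this yields the widely used value &amp;lt;math&amp;gt;\chi \approx 0.7298&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
```python
import math

def constriction(phi, kappa=1.0):
    # Per-component constriction coefficient chi (Clerc and Kennedy 2002),
    # defined here for phi >= 4 and kappa in [0, 1].
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(4.1)  # the commonly used setting, chi ~ 0.7298
```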
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
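The sampling step of the bare-bones update can be sketched in one dimension as follows (the function name is hypothetical):&lt;br /&gt;
&lt;br /&gt;
```python
import random

def bare_bones_step(b_ij, l_ij, rng):
    # Sample the new j-th coordinate from N(mu, sigma), where mu is the
    # midpoint of the personal and neighborhood bests and sigma their distance.
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)

rng = random.Random(3)
sample = bare_bones_step(1.0, 3.0, rng)
```
&lt;br /&gt;
When the personal and neighborhood bests coincide, the standard deviation collapses to zero and the particle stays at that point.&lt;br /&gt;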
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends ==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4916</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4916"/>
		<updated>2008-10-22T16:52:41Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution to the optimization problem under&lt;br /&gt;
consideration, defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best&lt;br /&gt;
position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has&lt;br /&gt;
ever visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above follows synchronous updates of particle positions and best&lt;br /&gt;
positions, where the best position found is updated only after all particle&lt;br /&gt;
positions and personal best positions have been updated. In asynchronous&lt;br /&gt;
update mode, the best position found is updated immediately after each&lt;br /&gt;
particle's position update. Asynchronous updates propagate the best&lt;br /&gt;
solutions through the swarm more quickly.&lt;br /&gt;
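The pseudocode above can be sketched as a short Python program. This is an illustrative sketch only, assuming a fully connected topology and synchronous updates; the function and parameter names (pso_minimize, iters, and so on) are ours and not part of the standard presentation.

```python
import random

def pso_minimize(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200):
    """Minimal standard PSO with a fully connected topology (sketch)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]          # velocities start at zero
    b = [xi[:] for xi in x]                      # personal-best positions
    fb = [f(xi) for xi in x]                     # personal-best values
    g = min(range(k), key=lambda i: fb[i])       # neighborhood (global) best
    for _ in range(iters):
        for i in range(k):
            for j in range(dim):
                u1, u2 = random.random(), random.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])    # cognitive term
                           + phi2 * u2 * (b[g][j] - x[i][j]))   # social term
                x[i][j] += v[i][j]
        for i in range(k):                       # synchronous personal-best update
            fx = f(x[i])
            if fb[i] > fx:
                fb[i], b[i] = fx, x[i][:]
        g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]

sphere = lambda p: sum(c * c for c in p)
best, value = pso_minimize(sphere, dim=5, bounds=(-10.0, 10.0))
```

On a smooth test function such as the sphere, the swarm rapidly contracts around the best region found.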
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
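This position-update rule can be sketched as follows (the helper names are ours; velocities are assumed to be updated as in the standard algorithm):

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_i):
    """Sample a binary position vector from a velocity vector (sketch)."""
    # Component j is set to 1 with probability sig(v_ij), to 0 otherwise.
    return [1 if sig(vj) > random.random() else 0 for vj in v_i]

bits = binary_position_update([-4.0, 0.0, 4.0])
```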
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
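As a numerical sketch (the function name is ours), the per-component constriction value can be computed directly from the formula above; for the commonly used setting of phi = 4.1 and kappa = 1 it evaluates to roughly 0.7298:

```python
import math

def constriction(phi, kappa=1.0):
    """Per-component constriction coefficient (sketch; assumes phi is at least 4)."""
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))

chi = constriction(4.1)
```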
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp;=&amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
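The sampling step amounts to one Gaussian draw per dimension, centered midway between the personal best and the neighborhood best. A minimal sketch (the helper name is ours):

```python
import random

def bare_bones_step(b_i, l_i):
    """Sample a new position: one Gaussian draw per dimension (sketch)."""
    # Mean: midpoint of personal best and neighborhood best.
    # Std. dev.: absolute distance between them.
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]

new_x = bare_bones_step([1.0, 2.0], [3.0, 2.0])
```

Note that when the personal best and the neighborhood best agree in a dimension, the standard deviation is zero and the particle stays put there.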
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
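A sketch of this update rule follows. The helper names are ours, and the default weighting (every neighbor contributes with weight 1) is just one simple choice for the function W; Mendes et al. also study quality-dependent weightings.

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7298, phi=4.1, weight=None):
    """FIPS velocity update: every neighbor's personal best contributes (sketch)."""
    if weight is None:                  # simplest choice of W: equal weights
        weight = lambda pos: 1.0
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        # Weighted, randomly scaled pull toward each neighbor's personal best.
        pull = sum(weight(bj) * random.random() * (bj[j] - x_i[j])
                   for bj in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * pull)
    return new_v

v_next = fips_velocity([0.0], [0.0], [[1.0], [3.0]])
```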
&lt;br /&gt;
== Applications of PSO and Current Trends ==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4915</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4915"/>
		<updated>2008-10-22T16:51:55Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
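A ring topology, for instance, can be sketched as follows (an illustrative Python fragment, not from the original presentation; it assumes particles are indexed from 0 to k-1 and that each particle belongs to its own neighborhood):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch (assumption: particles indexed 0..k-1): neighborhood
# index lists for a ring topology, where each particle is a neighbor of the
# two adjacent particles and of itself.
def ring_neighborhoods(k):
    return [[(i - 1) % k, i, (i + 1) % k] for i in range(k)]
```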
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates of the particles' best&lt;br /&gt;
positions: the neighborhood best positions are updated only after all&lt;br /&gt;
particle positions and personal best positions have been updated. In&lt;br /&gt;
asynchronous update mode, the best positions are updated immediately after&lt;br /&gt;
each particle's position update, so good solutions propagate through the&lt;br /&gt;
swarm more quickly.&lt;br /&gt;
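The pseudocode above can be turned into a short runnable sketch (illustrative Python with a fully connected topology, the sphere function as objective, and arbitrary parameter values; none of these choices come from the original presentation):&lt;br /&gt;
&lt;br /&gt;
```python
import random

# Minimal sketch of the standard synchronous PSO loop. The fully connected
# topology makes the neighborhood best equal to the swarm's global best.
# Parameter defaults are illustrative, chosen to keep velocities bounded.
def pso(f, dim, k=20, iters=200, w=0.7, phi1=1.5, phi2=1.5, lo=-5.0, hi=5.0):
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]          # velocities start at zero
    b = [p[:] for p in x]                        # personal best positions
    fb = [f(p) for p in b]                       # personal best values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # neighborhood (global) best
        for i in range(k):                       # velocity and position update
            for j in range(dim):
                u1, u2 = random.random(), random.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        for i in range(k):                       # synchronous solution update
            fx = f(x[i])
            if fb[i] > fx:
                b[i], fb[i] = x[i][:], fx
    return min(fb)

sphere = lambda p: sum(t * t for t in p)
```
&lt;br /&gt;
A fixed iteration budget stands in for the stopping criterion; calling pso(sphere, 2) returns the best objective value found.&lt;br /&gt;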
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
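This update can be sketched as follows (illustrative Python, not from the original presentation; sig is the sigmoid function defined above):&lt;br /&gt;
&lt;br /&gt;
```python
import math
import random

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# Binary PSO position update for one particle: component j becomes 1 with
# probability sig(v_ij), i.e. when a uniform random r in [0, 1) falls
# below sig(v_ij), and 0 otherwise.
def update_binary_position(v_i):
    return [1 if sig(vj) > random.random() else 0 for vj in v_i]
```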
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
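For illustration, the widely used scalar form of the constriction coefficient, in which phi = phi1 + phi2 is a constant greater than 4 rather than a per-dimension random quantity, can be computed as follows (a sketch, not the matrix form given above):&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Scalar constriction coefficient (Clerc and Kennedy 2002). Requires
# phi1 + phi2 greater than 4 so the square root is real.
def constriction(phi1, phi2, kappa=1.0):
    phi = phi1 + phi2
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```
&lt;br /&gt;
With phi1 = phi2 = 2.05 and kappa = 1, this yields the familiar value of approximately 0.7298.&lt;br /&gt;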
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
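The sampling step can be sketched as follows (illustrative Python, not from the original presentation, applied particle-wise across all dimensions):&lt;br /&gt;
&lt;br /&gt;
```python
import random

# Bare-bones position update (Kennedy 2003): each coordinate is sampled
# from a Gaussian centered midway between the personal best b_i and the
# neighborhood best l_i, with standard deviation equal to their distance.
def bare_bones_position(b_i, l_i):
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```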
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
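The update can be sketched as follows (illustrative Python, not from the original presentation; for simplicity the weighting function is taken to be constant at 1, i.e. all neighbors contribute equally, together with the inertia-weight form shown above):&lt;br /&gt;
&lt;br /&gt;
```python
import random

# FIPS velocity update for one particle (Mendes et al. 2004), with a
# constant weighting function W = 1: every neighbor's personal best pulls
# on the particle, each scaled by a fresh uniform random number.
def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1):
    m = len(neighbor_bests)
    new_v = []
    for j, vj in enumerate(v_i):
        pull = sum(random.random() * (b[j] - x_i[j]) for b in neighbor_bests)
        new_v.append(w * vj + (phi / m) * pull)
    return new_v
```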
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4914</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4914"/>
		<updated>2008-10-22T16:50:41Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction, preventing&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates: personal and neighborhood best&lt;br /&gt;
positions are updated only after all particle positions have been updated. In&lt;br /&gt;
asynchronous update mode, the best positions are updated immediately after&lt;br /&gt;
each particle's position update, so good solutions propagate through the&lt;br /&gt;
swarm more quickly.&lt;br /&gt;
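As a concrete illustration, the synchronous pseudocode above can be sketched in Python. This is a minimal sketch, not a reference implementation: it assumes a fully connected topology (so the neighborhood best is the swarm best), box-shaped initialization, and a fixed iteration budget as the stopping criterion; the function name and the parameter defaults are illustrative.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.729, phi1=1.494, phi2=1.494, seed=0):
    """Minimal synchronous PSO sketch with a fully connected topology."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random positions in the initialization region, zero velocities.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    b = [xi[:] for xi in x]            # personal best positions
    fb = [f(xi) for xi in x]           # personal best values
    for _ in range(iters):
        # Neighborhood best: with a fully connected topology this is the swarm best.
        g = min(range(n_particles), key=fb.__getitem__)
        for i in range(n_particles):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()   # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Synchronous personal-best update, after all positions have moved.
        for i in range(n_particles):
            fx = f(x[i])
            if fb[i] > fx:
                fb[i], b[i] = fx, x[i][:]
    g = min(range(n_particles), key=fb.__getitem__)
    return b[g], fb[g]
```

Running this sketch on the sphere function quickly drives the swarm toward the origin.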
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
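For illustration, the binary position update can be written as a short Python sketch; the helper names `sig` and `binary_position_update` are ours, not from any library.

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_i, rng):
    """Set each bit to 1 with probability sig(v_ij), following the binary PSO
    rule of Kennedy and Eberhart (1997)."""
    return [1 if sig(vj) > rng.random() else 0 for vj in v_i]
```

Large positive velocity components make the corresponding bit 1 with probability close to one; large negative components make it 0.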
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
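For illustration, the constriction factor of Clerc and Kennedy (2002) can be computed for a given value of the combined coefficient; the function name is ours.

```python
import math

def constriction(phi, kappa=1.0):
    """Constriction factor chi = 2*kappa / |2 - phi - sqrt(phi*(phi - 4))|,
    valid for phi greater than 4 (Clerc and Kennedy 2002)."""
    assert phi > 4, "the convergence condition requires phi greater than 4"
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * (phi - 4.0)))
```

For example, the commonly used setting phi = 4.1 with kappa = 1 yields chi of roughly 0.7298, the value widely quoted in the PSO literature.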
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
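The per-dimension sampling step can be sketched in Python as follows; the function name is ours.

```python
import random

def bare_bones_position(b_ij, l_ij, rng):
    """Sample the new j-th coordinate from N(mu, sigma), where the mean is the
    midpoint between the personal and neighborhood bests and the standard
    deviation is their distance, as in the bare-bones PSO of Kennedy (2003)."""
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)
```

Note that when the personal and neighborhood bests coincide, the standard deviation is zero and the particle collapses onto that point.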
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
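As an illustration, the FIPS velocity update can be sketched in Python with a uniform weighting function (a constant weight of 1, which recovers the plain averaging case); the function name and defaults are ours.

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.729, phi=4.1,
                  rng=None, weight=lambda b: 1.0):
    """FIPS velocity sketch (Mendes et al. 2004): every neighbor's personal
    best contributes, scaled by phi / |N_i| and a random factor per dimension."""
    rng = rng or random.Random()
    dim = len(v_i)
    new_v = [w * vj for vj in v_i]                 # inertia term
    for b in neighbor_bests:
        for j in range(dim):
            u = rng.random()                       # diagonal entry of U_j
            new_v[j] += (phi / len(neighbor_bests)) * weight(b) * u * (b[j] - x_i[j])
    return new_v
```

With a single neighbor whose best coincides with the particle's position, only the inertia term survives.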
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4913</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4913"/>
		<updated>2008-10-22T16:23:44Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based&lt;br /&gt;
stochastic approach for solving continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm&lt;br /&gt;
optimization can be traced back to the work of Reeves (1983), who proposed&lt;br /&gt;
particle systems to model objects that are dynamic and cannot be easily&lt;br /&gt;
represented by polygons or surfaces. Examples of such objects are fire, smoke,&lt;br /&gt;
water and clouds. In these models, particles are independent of each other and&lt;br /&gt;
their movement is governed by a set of rules. Some years later, Reynolds&lt;br /&gt;
(1987) used a particle system to simulate the collective behavior of a flock&lt;br /&gt;
of birds. In a similar kind of simulation, Heppner and Grenander (1990)&lt;br /&gt;
included a ''roost'' that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}}&lt;br /&gt;
\, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*)&lt;br /&gt;
\leq f(\vec{\theta}), \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles&lt;br /&gt;
&amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position&lt;br /&gt;
represents a candidate solution of the considered optimization problem&lt;br /&gt;
represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step&lt;br /&gt;
&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best&lt;br /&gt;
position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has&lt;br /&gt;
ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best'').&lt;br /&gt;
Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its&lt;br /&gt;
''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the&lt;br /&gt;
standard particle swarm optimization algorithm, the particles' neighborhood&lt;br /&gt;
relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where&lt;br /&gt;
each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each&lt;br /&gt;
edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of&lt;br /&gt;
particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
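For instance, a ring topology and the corresponding neighborhood-best lookup can be sketched in Python; particles are identified by their indices, and the function names are ours.

```python
def ring_neighborhoods(k):
    """Ring topology: particle i's neighborhood holds itself and the two
    adjacent particles, i.e. indices i-1, i, i+1 modulo k."""
    return [{(i - 1) % k, i, (i + 1) % k} for i in range(k)]

def neighborhood_best(i, neighborhoods, personal_bests, f):
    """l_i: the best personal-best position among particle i's neighbors."""
    return min((personal_bests[j] for j in neighborhoods[i]), key=f)
```

A particle in a ring of five thus sees only three personal bests, which slows the spread of the global best compared with a fully connected topology.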
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. Velocities are usually&lt;br /&gt;
initialized to zero, but can be initialized to small random values. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'',&lt;br /&gt;
&amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called&lt;br /&gt;
''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices&lt;br /&gt;
in which the entries in the main diagonal are distributed in the interval&lt;br /&gt;
&amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices&lt;br /&gt;
are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq&lt;br /&gt;
\vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Usually, the vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt;,&lt;br /&gt;
referred to as the ''neighborhood best,''  is the best position ever found by&lt;br /&gt;
any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is,&lt;br /&gt;
&amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. Alternatively, the neighborhood best can be selected as&lt;br /&gt;
the current best particle, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{x}^{\,t}_j) \,\,\, \forall p_j \in&lt;br /&gt;
\mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
The three terms in the velocity-update rule characterize the local, simple&lt;br /&gt;
behaviors that particles follow. The first term, called the ''inertia'' or&lt;br /&gt;
''momentum'', serves as a memory of the previous flight direction and prevents&lt;br /&gt;
the particle from drastically changing direction. The second term, called the&lt;br /&gt;
''cognitive component'', models the tendency of particles to return to&lt;br /&gt;
previously found best positions. The third term, called the ''social&lt;br /&gt;
component'', quantifies the performance of a particle relative to its&lt;br /&gt;
neighbors. It represents a group norm or standard that should be attained.&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the number of particles &amp;lt;math&amp;gt;|\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; to zero or small random values&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The algorithm above uses synchronous updates: personal and neighborhood best&lt;br /&gt;
positions are updated only after all particle positions have been updated. In&lt;br /&gt;
asynchronous update mode, the best positions are updated immediately after&lt;br /&gt;
each particle's position update, so good solutions propagate through the&lt;br /&gt;
swarm more quickly.&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Constriction Coefficient ===&lt;br /&gt;
&lt;br /&gt;
The ''constriction coefficient'' was introduced as an outcome of a theoretical&lt;br /&gt;
analysis of swarm dynamics (Clerc and Kennedy 2002). Velocities&lt;br /&gt;
are constricted, with the following change in the velocity update:&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = \chi^t[\vec{v}^{\,t}_i +&lt;br /&gt;
\varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) +&lt;br /&gt;
\varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)]&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\chi^t&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrix in&lt;br /&gt;
which the entries in the main diagonal are calculated as&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\chi^t_{jj}=\frac{2\kappa}{|2-\varphi^t_{jj}-\sqrt{\varphi^t_{jj}(\varphi^t_{jj}-4)}|}&amp;lt;/math&amp;gt;&lt;br /&gt;
with &amp;lt;math&amp;gt;\varphi^t_{jj}=\varphi_1U^t_{1,jj}+\varphi_2U^t_{2,jj}&amp;lt;/math&amp;gt;. Convergence is guaranteed under&lt;br /&gt;
the conditions that &amp;lt;math&amp;gt;\varphi^t_{jj}\ge 4\,\forall j&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\kappa\in&lt;br /&gt;
[0,1]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare-bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t},\sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural&lt;br /&gt;
network training and was published together with the algorithm itself (Kennedy&lt;br /&gt;
and Eberhart 1995). Many more areas of application have been explored ever&lt;br /&gt;
since, including telecommunications, control, data mining, design,&lt;br /&gt;
combinatorial optimization, power systems, signal processing, and many others.&lt;br /&gt;
To date, there are hundreds of publications reporting applications of particle&lt;br /&gt;
swarm optimization algorithms. For a review, see (Poli 2008). Although PSO has&lt;br /&gt;
been used mainly to solve unconstrained, single-objective optimization problems, PSO algorithms&lt;br /&gt;
have been developed to solve constrained problems, multi-objective&lt;br /&gt;
optimization problems, problems with dynamically changing landscapes, and to&lt;br /&gt;
find multiple solutions. For a review, see (Engelbrecht 2005).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and&lt;br /&gt;
convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird&lt;br /&gt;
flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm&lt;br /&gt;
algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm:&lt;br /&gt;
simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm&lt;br /&gt;
optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An&lt;br /&gt;
overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy&lt;br /&gt;
objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4912</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4912"/>
		<updated>2008-10-13T08:22:57Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* External Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity) . The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution of the considered optimization problem represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
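&lt;br /&gt;
The ring topology mentioned in the figure caption can be built explicitly as a list of neighbor sets; the function name and the convention of including a particle in its own neighborhood are illustrative assumptions, not prescribed by the text:&lt;br /&gt;

```python
def ring_neighborhoods(k):
    # For a swarm of k particles, particle i's neighborhood holds the two
    # particles adjacent to it on the ring plus itself (a common convention).
    return [{(i - 1) % k, i, (i + 1) % k} for i in range(k)]
```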
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'', &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
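&lt;br /&gt;
The pseudocode above can be turned into a short runnable Python sketch. A fully connected topology is assumed (so the neighborhood best is the swarm best), and the parameter defaults below are common choices from the literature, not values prescribed by the pseudocode:&lt;br /&gt;

```python
import random

def pso(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=42):
    """Minimal PSO with a fully connected topology, following the pseudocode."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: random positions and velocities within the init region.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[rng.uniform(-(hi - lo), hi - lo) for _ in range(dim)] for _ in range(k)]
    b = [xi[:] for xi in x]            # personal-best positions
    fb = [f(xi) for xi in x]           # personal-best values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # neighborhood best (global here)
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()
                # Velocity and position update rules from the pseudocode.
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
            # Solution update: keep the best position each particle has visited.
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]
```

For example, minimizing the sphere function &amp;lt;math&amp;gt;f(\vec{x}) = \sum_j x_j^2&amp;lt;/math&amp;gt; in two dimensions with ``pso(lambda p: sum(t*t for t in p), 2, (-5.0, 5.0))`` drives the best objective value close to zero.&lt;br /&gt;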
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t},\sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
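&lt;br /&gt;
The sampling rule above admits a direct sketch in Python; the function name is illustrative, and &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt; is treated as a standard deviation, per the definition above:&lt;br /&gt;

```python
import random

def bare_bones_update(b_i, l_i, rng=random):
    # New position: each component is drawn from N(mu, sigma) with
    # mu = (b_ij + l_ij) / 2 and sigma = |b_ij - l_ij|.
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj)) for bj, lj in zip(b_i, l_i)]
```

When a particle's personal best coincides with its neighborhood best, the standard deviation is zero and the particle stays put.&lt;br /&gt;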
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
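&lt;br /&gt;
The fully informed velocity update can be sketched as follows; the function name is illustrative, and the constant weighting function used as a default corresponds to the unweighted FIPS variant:&lt;br /&gt;

```python
import random

def fips_velocity(v_i, x_i, bests, w=0.7, phi=4.1, weight=lambda b: 1.0, rng=random):
    # Every neighbor's personal best contributes to the update, weighted by
    # `weight` and by a fresh uniform random factor per neighbor and dimension.
    n = len(bests)
    v_new = []
    for j in range(len(v_i)):
        s = sum(weight(b) * rng.random() * (b[j] - x_i[j]) for b in bests)
        v_new.append(w * v_i[j] + (phi / n) * s)
    return v_new
```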
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
** [http://www.computelligence.org/sis ''The IEEE Swarm Intelligence Symposia''], started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4911</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4911"/>
		<updated>2008-10-13T08:20:58Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity) . The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by the objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
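The topologies in the figure can be written down directly as neighborhood sets. The following Python sketch (illustrative only; the zero-based indexing convention and function names are our own) builds the ring and fully connected topologies for a swarm of &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; particles:&lt;br /&gt;

```python
def ring_neighborhoods(k):
    """Ring topology: each particle is a neighbor of its two adjacent
    particles (and of itself, matching the self-links in the figure)."""
    return {i: {(i - 1) % k, i, (i + 1) % k} for i in range(k)}

def fully_connected_neighborhoods(k):
    """Fully connected topology: N_i = P for every particle p_i."""
    return {i: set(range(k)) for i in range(k)}

ring = ring_neighborhoods(5)
full = fully_connected_neighborhoods(4)
```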
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'', &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{\vec{b}^{\,t}_j \,\colon\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f(\vec{b}^{\,t}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
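The pseudocode above can be turned into a short program. The following Python sketch implements the standard update rules with a fully connected topology; the sphere objective, the search range, and the parameter values are illustrative choices, not part of the algorithm's definition:&lt;br /&gt;

```python
import numpy as np

def pso(f, dim, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=0):
    """Minimize f over [-5, 5]^dim with a fully connected swarm of k particles."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (k, dim))      # positions, initialized in Theta'
    v = rng.uniform(-1, 1, (k, dim))      # velocities
    b = x.copy()                          # personal bests b_i = x_i at t = 0
    fb = np.apply_along_axis(f, 1, b)     # objective values of personal bests
    for _ in range(iters):
        l = b[np.argmin(fb)]              # neighborhood best (global, since N_i = P)
        u1 = rng.uniform(0, 1, (k, dim))  # diagonals of U1, U2, regenerated each step
        u2 = rng.uniform(0, 1, (k, dim))
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (l - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < fb                  # solution update loop
        b[better], fb[better] = x[better], fx[better]
    return b[np.argmin(fb)], float(fb.min())

sphere = lambda th: float(np.sum(th ** 2))
best, value = pso(sphere, dim=2)
```

On this two-dimensional sphere function, the sketch converges to a point near the origin within the 200 iterations used here.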
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
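As an illustration, the binary position-update rule can be sketched as follows (illustrative Python; function names are our own):&lt;br /&gt;

```python
import math
import random

def sig(v):
    """Logistic function mapping a velocity component to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def update_binary_position(velocity, rng=random):
    """Set each bit to 1 with probability sig(v_ij), as in the binary PSO rule."""
    return [1 if rng.random() < sig(v_j) else 0 for v_j in velocity]

bits = update_binary_position([-4.0, 0.0, 4.0])
```

A strongly positive velocity component makes the corresponding bit almost always 1, and a strongly negative one makes it almost always 0.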
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}, \sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
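In other words, each coordinate is drawn from a Gaussian centered midway between the personal best and the neighborhood best, with a spread equal to their distance. A one-coordinate Python sketch (illustrative; the function name is our own):&lt;br /&gt;

```python
import random

def bare_bones_step(b_ij, l_ij, rng=random):
    """Sample the new j-th coordinate from N(mu, sigma) with
    mu = (b_ij + l_ij)/2 and sigma = |b_ij - l_ij|."""
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)

random.seed(0)
sample = bare_bones_step(1.0, 3.0)
```

Note that when the personal best and the neighborhood best coincide, the spread collapses to zero and the particle stays at that point.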
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
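The FIPS rule can be sketched for a single particle as follows (illustrative Python; the uniform weighting function used here stands in for &amp;lt;math&amp;gt;\mathcal{W}&amp;lt;/math&amp;gt;, which in general depends on the quality of each neighbor's personal best):&lt;br /&gt;

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1,
                  weight=lambda b: 1.0, rng=random):
    """FIPS velocity update: every neighbor's personal best contributes,
    each scaled by a fresh random number per dimension (the diagonal of U_j)."""
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        social = sum(weight(b) * rng.random() * (b[j] - x_i[j])
                     for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * social)
    return new_v

random.seed(1)
new_v = fips_velocity([0.5, 0.5], [0.0, 0.0], [[1.0, 2.0], [3.0, -1.0]])
```

When all neighbor bests coincide with the particle's position, the social term vanishes and only the inertia term remains, which follows directly from the update equation above.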
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4910</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4910"/>
		<updated>2008-10-13T08:20:02Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Applications of PSO and Current Trends */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by the objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'', &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{\vec{b}^{\,t}_j \,\colon\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f(\vec{b}^{\,t}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}, \sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems (e.g., multiobjective)&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4909</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4909"/>
		<updated>2008-10-13T08:17:51Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution of the considered optimization problem represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
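The population topologies described above are easy to construct explicitly. The following is a minimal Python sketch of a ring topology in which each neighborhood contains the particle itself and its two adjacent particles (consistent with the fully connected example, where self-links are included but not drawn); the function name is ours, not part of any standard library.

```python
def ring_neighborhoods(k):
    """Ring topology: particle i's neighborhood is itself and the two
    particles adjacent to it on the ring (indices taken modulo k)."""
    return [{i, (i - 1) % k, (i + 1) % k} for i in range(k)]

hoods = ring_neighborhoods(5)
```

A fully connected topology would instead return `set(range(k))` for every particle.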
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose main-diagonal entries are drawn uniformly at random from the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
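The pseudocode above can be expressed as a short runnable program. The following is a minimal Python sketch, assuming a fully connected topology (so the neighborhood best is simply the swarm best), synchronous updates once per iteration, and the sphere function as an example objective; the parameter values are illustrative placeholders, not tuning recommendations.

```python
import random

random.seed(1)  # for a reproducible example run

def pso(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200):
    """Minimal standard PSO sketch: fully connected topology, synchronous updates."""
    lo, hi = bounds
    # Random initialization of positions; velocities start at zero
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]
    b = [xi[:] for xi in x]                      # personal best positions
    for _ in range(iters):
        l = min(b, key=f)                        # neighborhood best (swarm best here)
        for i in range(k):
            for j in range(dim):
                # U1, U2 are diagonal random matrices: one uniform draw per dimension
                u1, u2 = random.random(), random.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (l[j] - x[i][j]))
                x[i][j] += v[i][j]
            if f(x[i]) < f(b[i]):                # personal best update
                b[i] = x[i][:]
    return min(b, key=f)

sphere = lambda p: sum(c * c for c in p)         # example objective
best = pso(sphere, dim=2, bounds=(-5.0, 5.0))
```

Using a ring or von Neumann topology only changes how `l` is computed for each particle: it becomes the minimum over that particle's neighborhood rather than over the whole swarm.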
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
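The binary position-update rule above can be sketched directly; the helper names below are ours, introduced only for illustration.

```python
import math
import random

random.seed(0)  # for a reproducible example

def sig(x):
    """Logistic function sig(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def update_binary_position(velocity):
    """Binary PSO position update: component j of the position becomes 1
    with probability sig(v_j), and 0 otherwise."""
    return [1 if random.random() < sig(vj) else 0 for vj in velocity]

bits = update_binary_position([4.0, -4.0, 0.0])
```

Note that a large positive velocity component drives the corresponding bit toward 1, a large negative one toward 0, and a zero velocity leaves it at a 50/50 coin flip.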
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
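A minimal sketch of this sampling step, using the mean and standard deviation defined above (the function name is ours):

```python
import random

def bare_bones_step(b_i, l_i):
    """Bare-bones PSO position update: sample each coordinate j from a normal
    distribution with mean (b_ij + l_ij)/2 and std. dev. |b_ij - l_ij|."""
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

When the personal best and the neighborhood best coincide, the standard deviation is zero and the particle stays put; the farther apart they are, the wider the region the particle explores between (and beyond) them.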
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
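The FIPS velocity-update rule can be sketched as follows. The constant quality weight and the parameter values are placeholder assumptions (FIPS is usually run with a constriction coefficient; a plain inertia weight is used here only to mirror the equation above).

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1,
                  weight=lambda b: 1.0):
    """FIPS velocity update: every neighbor's personal best b_j contributes,
    scaled by phi/|N_i|, a quality weight W(b_j) (a constant here, as an
    assumption), and a fresh uniform random factor per dimension."""
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        social = sum(weight(b) * random.random() * (b[j] - x_i[j])
                     for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * social)
    return new_v
```

With a single neighbor this reduces to a standard social term; with many neighbors the particle is pulled toward a stochastic weighted average of all their personal bests rather than toward the single best one.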
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4908</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4908"/>
		<updated>2008-10-08T15:00:11Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution of the considered optimization problem represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
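The pseudocode above translates almost line by line into a runnable sketch. The following Python version uses a fully connected topology (every particle is a neighbor of every other); the parameter defaults and function names are illustrative assumptions, not part of the standard definition:

```python
import random

def pso(f, dim, k=20, iters=100, w=0.72, phi1=1.49, phi2=1.49,
        lo=-5.0, hi=5.0, seed=0):
    # Minimal sketch of the standard PSO loop with a fully connected
    # topology. Parameter defaults are illustrative, not canonical.
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]
    b = [xi[:] for xi in x]          # personal best positions
    for _ in range(iters):
        g = min(b, key=f)            # neighborhood best (global here)
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
            # keep the better of the old personal best and the new position
            b[i] = min(b[i], x[i], key=f)[:]
    return min(b, key=f)
```

On a low-dimensional sphere function, for example, this sketch moves the swarm quickly toward the minimizer.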
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
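The binary position-update rule above can be sketched as follows. The helper names are hypothetical; adding the uniform draw to sig(v) yields a bit that equals 1 with probability sig(v), which matches the rule without an explicit threshold test:

```python
import math
import random

def sig(x):
    # Logistic function used by the binary PSO position-update rule.
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_row, rng=random):
    # Each bit becomes 1 with probability sig(v_ij): int(r + s) equals 1
    # exactly when the uniform draw r lands in [1 - s, 1), an event of
    # probability s.
    return [int(rng.random() + sig(vj)) for vj in v_row]
```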
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
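The sampling rule above amounts to drawing each coordinate from a normal distribution centered halfway between the personal best and the neighborhood best, with standard deviation equal to their distance. A minimal sketch (the helper name is hypothetical):

```python
import random

def bare_bones_update(b_i, l_i, rng=random):
    # New position: coordinate j is drawn from a normal distribution with
    # mean (b_ij + l_ij) / 2 and standard deviation abs(b_ij - l_ij).
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

Note that when the personal and neighborhood bests coincide in some dimension, the standard deviation there is zero and the particle stops moving along that dimension.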
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
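The FIPS velocity update can be sketched as below. For simplicity the weighting function is taken to be constant, so every neighbor contributes with equal weight; the function name and parameter values are illustrative assumptions:

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.72, phi=4.1, rng=random):
    # Sketch of the FIPS rule with a constant weighting function: every
    # neighbor's personal best pulls on the particle with equal weight.
    n = len(neighbor_bests)
    new_v = []
    for j, vj in enumerate(v_i):
        pull = sum(rng.random() * (b[j] - x_i[j]) for b in neighbor_bests)
        new_v.append(w * vj + (phi / n) * pull)
    return new_v
```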
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve single- and multi-objective optimization problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4907</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4907"/>
		<updated>2008-10-08T14:59:27Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem at hand, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, &lt;br /&gt;
 the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve single- and multi-objective optimization problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Computational Intelligence]]&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4906</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4906"/>
		<updated>2008-10-08T14:51:08Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by the objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
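&lt;br /&gt;
The pseudocode above can also be read as a short program. The following NumPy sketch assumes a fully connected topology (the neighborhood best is then the swarm's global best) and uses illustrative parameter values &amp;lt;math&amp;gt;w = 0.7&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_1 = \varphi_2 = 1.5&amp;lt;/math&amp;gt;; it is a minimal sketch under these assumptions, not a reference implementation.&lt;br /&gt;

```python
import numpy as np

def pso(f, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=0):
    # Minimal global-best PSO sketch: N_i = P for every particle.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    n = lo.shape[0]
    x = rng.uniform(lo, hi, size=(k, n))            # positions x_i^t in Theta'
    v = rng.uniform(-(hi - lo), hi - lo, (k, n))    # velocities v_i^t
    b = x.copy()                                    # personal bests b_i^t
    fb = np.array([f(p) for p in b])
    for _ in range(iters):
        l = b[np.argmin(fb)]                        # neighborhood (here: global) best
        u1 = rng.random((k, n))                     # diagonal entries of U_1^t
        u2 = rng.random((k, n))                     # diagonal entries of U_2^t
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (l - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < fb                            # personal-best update
        b[better], fb[better] = x[better], fx[better]
    return b[np.argmin(fb)], float(fb.min())
```

For example, calling pso with the two-dimensional sphere function f(x) = x_1^2 + x_2^2 and the box [-5, 5]^2 drives the best objective value close to zero within a few hundred iterations.&lt;br /&gt;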
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
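&lt;br /&gt;
The sampling step can be sketched as follows. This shows only the position update; the velocity vector is assumed to have already been updated by the standard rule.&lt;br /&gt;

```python
import numpy as np

def sig(v):
    # Logistic function mapping a velocity component to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-v))

def binary_position_update(v_next, rng=None):
    # Bit j of the new position is 1 with probability sig(v_next[j]).
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random(v_next.shape)        # r ~ U[0, 1), one draw per component
    return (r < sig(v_next)).astype(int)
```

Large positive velocity components make the corresponding bits almost surely 1, and large negative components make them almost surely 0.&lt;br /&gt;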
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution whose mean and standard deviation are&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
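&lt;br /&gt;
This sampling step can be sketched componentwise with NumPy's vectorized normal sampling. Note that when the personal best and the neighborhood best coincide in a dimension, the standard deviation is zero and the particle stays at that value.&lt;br /&gt;

```python
import numpy as np

def bare_bones_update(b_i, l_i, rng=None):
    # Sample component j of the new position from N(mu, sigma) with
    # mu = (b_ij + l_ij) / 2 and sigma = |b_ij - l_ij|.
    if rng is None:
        rng = np.random.default_rng()
    mu = (b_i + l_i) / 2.0
    sigma = np.abs(b_i - l_i)
    return rng.normal(mu, sigma)
```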
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
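&lt;br /&gt;
The FIPS update for a single particle can be sketched as below. Here the weighting function is taken as &amp;lt;math&amp;gt;\mathcal{W} \equiv 1&amp;lt;/math&amp;gt;, which recovers the plain (unweighted) FIPS rule; the parameter values w = 0.729 and phi = 4.1 are illustrative, not prescribed by the text.&lt;br /&gt;

```python
import numpy as np

def fips_velocity(v_i, x_i, nbr_bests, w=0.729, phi=4.1, weight=None, rng=None):
    # nbr_bests: (m, n) array holding the personal bests b_j of the m neighbors.
    if rng is None:
        rng = np.random.default_rng()
    if weight is None:
        weight = lambda b: 1.0          # W(b) = 1 gives the unweighted FIPS rule
    m, n = nbr_bests.shape
    acc = np.zeros(n)
    for b_j in nbr_bests:
        u_j = rng.random(n)             # fresh diagonal entries of U_j per neighbor
        acc += weight(b_j) * u_j * (b_j - x_i)
    return w * v_i + (phi / m) * acc    # averaged pull from all neighbors
```

If every neighbor's personal best coincides with the particle's position, the attraction term vanishes and the velocity is simply damped by the inertia weight.&lt;br /&gt;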
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single- and multi-objective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - International Conference on Swarm Intelligence''], started in 1998.&lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Computational Intelligence]]&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4905</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4905"/>
		<updated>2008-10-08T14:46:15Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by the objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
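The pseudocode above can be rendered as a compact, runnable sketch. This is a minimal illustration, not a reference implementation: it assumes a fully connected topology (so each particle's neighborhood best is the swarm's global best), arbitrary parameter values, and a fixed iteration budget as the stopping criterion.

```python
import numpy as np

def pso(f, dim, k=20, iters=200, w=0.7, phi1=1.5, phi2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO with a fully connected topology (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (k, dim))       # positions
    v = np.zeros((k, dim))                  # velocities
    b = x.copy()                            # personal-best positions
    fb = np.apply_along_axis(f, 1, b)       # personal-best values
    for _ in range(iters):
        g = b[np.argmin(fb)]                # neighborhood best (global here)
        u1 = rng.uniform(0.0, 1.0, (k, dim))
        u2 = rng.uniform(0.0, 1.0, (k, dim))
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fb                  # solution update loop
        b[improved] = x[improved]
        fb[improved] = fx[improved]
    return b[np.argmin(fb)], float(fb.min())

# Usage: minimize the 3-dimensional sphere function
best, val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```

On smooth unimodal functions such as the sphere, this sketch typically converges to near-zero objective values within a few hundred iterations.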
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
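The binary position-update rule can be illustrated as follows; the velocity values for the particle are arbitrary assumptions for the example.

```python
import math
import random

random.seed(1)

def sig(x):
    # Sigmoid: maps a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Continuous velocity components of one particle (assumed values);
# each position bit is set to 1 with probability sig(v_j).
v = [2.0, -2.0, 0.0]
x = [1 if random.random() < sig(vj) else 0 for vj in v]
```

A large positive velocity component makes the corresponding bit almost certainly 1, a large negative one makes it almost certainly 0, and a zero component leaves it at 50/50.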
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}, \sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
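A minimal sketch of this sampling step, with assumed values for the personal best and the neighborhood best of one particle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Personal best b and neighborhood best l for one particle (assumed values)
b = np.array([1.0, -2.0, 0.5])
l = np.array([0.0, 1.0, 0.5])

mu = (b + l) / 2.0        # mean: midpoint between the two attractors
sigma = np.abs(b - l)     # standard deviation: distance between them

# Each dimension is sampled independently from N(mu_j, sigma_j); where
# b and l agree, sigma_j is 0 and the new coordinate equals that value.
x_next = rng.normal(mu, sigma)
```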
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is an ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
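The FIPS velocity update for a single particle can be sketched as follows, assuming three neighbors, arbitrary parameter values, and the constant weighting W(b_j) = 1 (the general case scales each term by the relative quality of b_j):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # problem dimension (assumed)
w, phi = 0.7, 2.0            # inertia weight and acceleration coefficient (assumed)

x = rng.uniform(-1.0, 1.0, n)      # position of the target particle
v = np.zeros(n)                    # its velocity
# Personal bests of the particle's three neighbors (assumed values)
B = rng.uniform(-1.0, 1.0, (3, n))

acc = np.zeros(n)
for b_j in B:
    u = rng.uniform(0.0, 1.0, n)   # fresh random vector per neighbor
    acc += u * (b_j - x)           # every neighbor contributes

v = w * v + (phi / len(B)) * acc   # average the neighbor contributions
x = x + v
```

Unlike the standard update, no single best neighbor dominates: each neighbor's personal best pulls on the particle, and the pulls are averaged.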
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4904</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4904"/>
		<updated>2008-10-08T14:45:28Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity) . The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed by a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution of the considered optimization problem represented by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
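The update rules above can be sketched in a few lines of NumPy. This is an illustrative example only: the dimension, the parameter values, and the attractor vectors are assumptions, and the random diagonal matrices are realized as component-wise uniform random vectors (which is equivalent).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                            # problem dimension (assumed)
w, phi1, phi2 = 0.7, 1.5, 1.5    # inertia weight and acceleration coefficients (assumed)

x = rng.uniform(-1.0, 1.0, n)    # current position
v = np.zeros(n)                  # current velocity
b = x.copy()                     # personal best position
l = rng.uniform(-1.0, 1.0, n)    # neighborhood best position (assumed)

# Multiplying by the random diagonal matrix U is the same as a
# component-wise product with a uniform random vector in [0, 1).
u1 = rng.uniform(0.0, 1.0, n)
u2 = rng.uniform(0.0, 1.0, n)

v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (l - x)
x = x + v
```

With properly chosen w, φ1 and φ2 the magnitude of v stays bounded over repeated applications of this update.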
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
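The pseudocode above can be rendered as a compact, runnable sketch. This is a minimal illustration, not a reference implementation: it assumes a fully connected topology (so each particle's neighborhood best is the swarm's global best), arbitrary parameter values, and a fixed iteration budget as the stopping criterion.

```python
import numpy as np

def pso(f, dim, k=20, iters=200, w=0.7, phi1=1.5, phi2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal PSO with a fully connected topology (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (k, dim))       # positions
    v = np.zeros((k, dim))                  # velocities
    b = x.copy()                            # personal-best positions
    fb = np.apply_along_axis(f, 1, b)       # personal-best values
    for _ in range(iters):
        g = b[np.argmin(fb)]                # neighborhood best (global here)
        u1 = rng.uniform(0.0, 1.0, (k, dim))
        u2 = rng.uniform(0.0, 1.0, (k, dim))
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fb                  # solution update loop
        b[improved] = x[improved]
        fb[improved] = fx[improved]
    return b[np.argmin(fb)], float(fb.min())

# Usage: minimize the 3-dimensional sphere function
best, val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```

On smooth unimodal functions such as the sphere, this sketch typically converges to near-zero objective values within a few hundred iterations.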
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;x_{ij}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the position vector of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
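The binary position-update rule can be illustrated as follows; the velocity values for the particle are arbitrary assumptions for the example.

```python
import math
import random

random.seed(1)

def sig(x):
    # Sigmoid: maps a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Continuous velocity components of one particle (assumed values);
# each position bit is set to 1 with probability sig(v_j).
v = [2.0, -2.0, 0.0]
x = [1 if random.random() < sig(vj) else 0 for vj in v]
```

A large positive velocity component makes the corresponding bit almost certainly 1, a large negative one makes it almost certainly 0, and a zero component leaves it at 50/50.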
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}, \sigma^{t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
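A minimal sketch of this sampling step, with assumed values for the personal best and the neighborhood best of one particle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Personal best b and neighborhood best l for one particle (assumed values)
b = np.array([1.0, -2.0, 0.5])
l = np.array([0.0, 1.0, 0.5])

mu = (b + l) / 2.0        # mean: midpoint between the two attractors
sigma = np.abs(b - l)     # standard deviation: distance between them

# Each dimension is sampled independently from N(mu_j, sigma_j); where
# b and l agree, sigma_j is 0 and the new coordinate equals that value.
x_next = rng.normal(mu, sigma)
```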
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
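The FIPS velocity update for a single particle can be sketched as follows, assuming three neighbors, arbitrary parameter values, and the constant weighting W(b_j) = 1 (the general case scales each term by the relative quality of b_j):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # problem dimension (assumed)
w, phi = 0.7, 2.0            # inertia weight and acceleration coefficient (assumed)

x = rng.uniform(-1.0, 1.0, n)      # position of the target particle
v = np.zeros(n)                    # its velocity
# Personal bests of the particle's three neighbors (assumed values)
B = rng.uniform(-1.0, 1.0, (3, n))

acc = np.zeros(n)
for b_j in B:
    u = rng.uniform(0.0, 1.0, n)   # fresh random vector per neighbor
    acc += u * (b_j - x)           # every neighbor contributes

v = w * v + (phi / len(B)) * acc   # average the neighbor contributions
x = x + v
```

Unlike the standard update, no single best neighbor dominates: each neighbor's personal best pulls on the particle, and the pulls are averaged.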
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4903</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4903"/>
		<updated>2008-10-08T14:42:25Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
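Population topologies such as the ring can be built as plain neighbor lists. The following sketch (not part of the original article) constructs a ring topology; note that a particle is conventionally counted in its own neighborhood, which this sketch assumes:

```python
def ring_neighborhoods(k):
    """Neighbor lists for a ring population topology over k particles:
    particle i's neighborhood holds the particle before it, itself, and
    the particle after it, with indices wrapping around the ring.
    A sketch; including the particle itself is one common convention."""
    return [[(i - 1) % k, i, (i + 1) % k] for i in range(k)]
```

For example, with four particles, particle 0's neighborhood is particles 3, 0, and 1.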
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\underset{{\vec{b}^{\,t}}_j \in \Theta \,|\, p_j \in \mathcal{N}_i}{\operatorname{arg\,min}} \, f({\vec{b}^{\,t}}_j)&amp;lt;/math&amp;gt; &lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
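The pseudocode above can also be sketched in Python. This is a minimal illustration, not part of the original article: it assumes a fully connected topology (so the neighborhood best is the swarm's global best), and the defaults w = 0.729 and phi1 = phi2 = 1.494 are merely common choices from the literature, not prescribed values:

```python
import numpy as np

def pso(f, bounds, k=20, iters=200, w=0.729, phi1=1.494, phi2=1.494, seed=0):
    """Minimal standard PSO with a fully connected topology (a sketch).
    f maps an n-dimensional point to a scalar; bounds is (lo, hi) arrays."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                                  # initialization region
    n = lo.shape[0]
    x = rng.uniform(lo, hi, size=(k, n))             # positions
    v = rng.uniform(lo - hi, hi - lo, size=(k, n))   # velocities
    b = x.copy()                                     # personal bests
    fb = np.apply_along_axis(f, 1, b)                # personal-best values
    for _ in range(iters):
        g = b[fb.argmin()]                 # neighborhood best (global, here)
        u1 = rng.random((k, n))            # diagonal entries of U1, fresh each step
        u2 = rng.random((k, n))            # diagonal entries of U2
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = np.less(fx, fb)         # particles that beat their personal best
        b[improved] = x[improved]
        fb[improved] = fx[improved]
    return b[fb.argmin()], fb.min()
```

On a smooth test problem such as the 2-dimensional sphere function, this sketch typically drives the objective value close to zero within a few hundred iterations.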
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
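The binary position-update rule above can be sketched as follows (an illustration, not the authors' code; the velocity update itself proceeds as in the standard algorithm):

```python
import numpy as np

def binary_position_update(v_next, rng):
    """Sample the next binary position from the updated velocity, following
    the binary PSO rule of Kennedy and Eberhart (1997): component j becomes 1
    with probability sig(v_j), else 0.  A sketch."""
    prob = 1.0 / (1.0 + np.exp(-v_next))   # sig(v), applied elementwise
    r = rng.random(v_next.shape)           # uniform random numbers in [0, 1)
    return np.less(r, prob).astype(int)    # 1 where r falls below sig(v)
```

Large positive velocity components make the corresponding bits almost surely 1, and large negative components make them almost surely 0.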
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
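The bare-bones sampling rule can be sketched as follows (an illustration, not the original code):

```python
import numpy as np

def bare_bones_position(b_i, l_i, rng):
    """Sample a particle's next position in the bare-bones PSO (Kennedy 2003):
    each component is drawn from a normal distribution centered midway between
    the personal best b_i and the neighborhood best l_i, with standard
    deviation equal to their componentwise distance.  A sketch."""
    mu = (b_i + l_i) / 2.0        # mean: midpoint of the two attractors
    sigma = np.abs(b_i - l_i)     # std. dev.: distance between the attractors
    return rng.normal(mu, sigma)
```

Note that when the personal best and the neighborhood best coincide, the standard deviation is zero and the particle is placed exactly at that point.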
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
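The FIPS velocity-update rule can be sketched as follows. This is an illustration, not the authors' code; the uniform weighting applied when `weights` is not given (W identically 1) is one common choice for the weighting function, not the only one:

```python
import numpy as np

def fips_velocity(v_i, x_i, nbr_bests, w=0.729, phi=4.1, weights=None, rng=None):
    """FIPS velocity update (Mendes et al. 2004): every neighbor's personal
    best contributes to the new velocity, weighted by W and a fresh random
    diagonal matrix per neighbor.  A sketch."""
    rng = rng or np.random.default_rng()
    m = len(nbr_bests)
    if weights is None:
        weights = np.ones(m)                 # W(b_j) = 1 for all neighbors
    acc = np.zeros_like(v_i)
    for b_j, w_j in zip(nbr_bests, weights):
        u_j = rng.random(x_i.shape)          # diagonal entries of U_j
        acc = acc + w_j * u_j * (b_j - x_i)
    return w * v_i + (phi / m) * acc
```

As a sanity check, a particle at rest whose neighbors' personal bests all coincide with its own position receives a zero velocity, since every attraction term vanishes.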
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve single- and multi-objective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4902</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4902"/>
		<updated>2008-10-08T14:30:52Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it.  The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The PSO algorithm starts with the random generation of the particles' positions and velocities within an initialization region &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, it is guaranteed that the particles' velocities do not grow to infinity (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
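&lt;br /&gt;
The pseudocode above translates almost line for line into a short program. The following sketch is illustrative only: it assumes a fully connected topology (each particle's neighborhood is the whole swarm), zero initial velocities, and the commonly used parameter values w = 0.729 and phi1 = phi2 = 1.494; none of these choices are prescribed by the text.&lt;br /&gt;

```python
import random

def pso(f, dim, k=20, iters=200, w=0.729, phi1=1.494, phi2=1.494, lo=-5.0, hi=5.0):
    # Minimal global-best PSO sketch: neighborhood N_i is the whole swarm.
    rng = random.Random(0)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[0.0] * dim for _ in range(k)]          # zero initial velocities (assumption)
    b = [xi[:] for xi in x]                      # personal-best positions
    fb = [f(xi) for xi in x]                     # personal-best objective values
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # index of the swarm-best particle
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()   # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
            fx = f(x[i])
            if fx < fb[i]:                       # personal-best update
                fb[i], b[i] = fx, x[i][:]
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]

sphere = lambda xs: sum(t * t for t in xs)       # test function, minimum 0 at the origin
best, val = pso(sphere, dim=5)
```
&lt;br /&gt;
On the 5-dimensional sphere function, this sketch steadily drives the best objective value toward zero.&lt;br /&gt;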
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
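&lt;br /&gt;
The binary position-update rule above can be illustrated in a few lines of Python; the function names and the use of Python's random module are assumptions for the sketch, not part of the original algorithm description.&lt;br /&gt;

```python
import math
import random

def sig(v):
    # Logistic function mapping a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def binary_position_update(velocity, rng):
    # Each bit is set to 1 with probability sig(v_j), 0 otherwise
    return [1 if rng.random() < sig(vj) else 0 for vj in velocity]

rng = random.Random(42)
bits = binary_position_update([-4.0, 0.0, 4.0], rng)  # strongly negative, neutral, strongly positive
```
&lt;br /&gt;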
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
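&lt;br /&gt;
The bare-bones sampling rule is straightforward to sketch; the helper below is an illustrative assumption, using Python's Gaussian sampler for the normal distribution defined above.&lt;br /&gt;

```python
import random

def bare_bones_step(b_i, l_i, rng):
    # Sample the new position componentwise from N(mu, sigma) with
    # mu = (b_j + l_j) / 2 and sigma = |b_j - l_j|
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj)) for bj, lj in zip(b_i, l_i)]

rng = random.Random(1)
x_new = bare_bones_step([1.0, 2.0], [3.0, 2.0], rng)
# In dimensions where the personal best and the neighborhood best coincide,
# sigma = 0 and the component stays put: x_new[1] == 2.0
```
&lt;br /&gt;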
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
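&lt;br /&gt;
The FIPS velocity update can be sketched as follows; the argument layout (one weight and one row of random diagonal entries per neighbor) is an assumption made for the illustration.&lt;br /&gt;

```python
def fips_velocity(v_i, x_i, neighbor_bests, weights, u_rows, w, phi):
    # FIPS rule: every neighbor's personal best b_j contributes, scaled by
    # its weight W(b_j) (weights[j]) and a random diagonal matrix U_j (u_rows[j]).
    n = len(x_i)
    m = len(neighbor_bests)
    new_v = [w * v_i[d] for d in range(n)]       # inertia term
    for b_j, w_j, u_j in zip(neighbor_bests, weights, u_rows):
        for d in range(n):
            new_v[d] += (phi / m) * w_j * u_j[d] * (b_j[d] - x_i[d])
    return new_v

# Deterministic check: two symmetric neighbors pull in opposite directions,
# so with zero inertia their attractions cancel and v stays [0.0, 0.0]
v = fips_velocity(
    v_i=[0.0, 0.0], x_i=[0.0, 0.0],
    neighbor_bests=[[1.0, 1.0], [-1.0, -1.0]],
    weights=[1.0, 1.0],
    u_rows=[[1.0, 1.0], [1.0, 1.0]],
    w=0.0, phi=2.0)
```
&lt;br /&gt;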
&lt;br /&gt;
== Applications of PSO and Current Trends ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single-objective and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, first held in 1998, is the oldest conference series in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4901</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4901"/>
		<updated>2008-10-08T14:26:33Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt; (self-links are not drawn for simplicity). The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem under consideration, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it.  The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by the vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as the particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random generation of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single-objective and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4900</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4900"/>
		<updated>2008-10-08T14:24:38Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
In PSO, the so-called ''swarm'' is composed of a set of particles &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt;. A particle's position represents a candidate solution to the optimization problem, which is defined by an objective function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position (with respect to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;) that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random generation of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose main-diagonal entries are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
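The pseudocode above can be turned into a short runnable sketch. The following minimal Python version is an illustration, not a reference implementation: it assumes a fully connected (global-best) topology, the commonly used parameter setting w = 0.72 and phi1 = phi2 = 1.49, zero initial velocities, and the sphere function as an example objective.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.72, phi1=1.49, phi2=1.49,
        lo=-5.0, hi=5.0, seed=42):
    """Minimize f over [lo, hi]^dim with a fully connected (gbest) topology."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    b = [xi[:] for xi in x]                  # personal-best positions
    fb = [f(xi) for xi in x]                 # personal-best objective values
    for _ in range(iters):
        g = min(range(n_particles), key=fb.__getitem__)  # best neighbor index
        # Velocity and position update loop
        for i in range(n_particles):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()  # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Solution (personal best) update loop
        for i in range(n_particles):
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx
    g = min(range(n_particles), key=fb.__getitem__)
    return b[g], fb[g]

sphere = lambda x: sum(t * t for t in x)     # illustrative test objective
best, val = pso(sphere, dim=5)
```

On the 5-dimensional sphere function, a few hundred iterations of this sketch typically drive the best objective value close to zero.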
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
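As an illustration, the binary update above can be sketched for a single particle. The helper names and the parameter values (w = 0.72, phi1 = phi2 = 1.49) are assumptions for the example, not part of the original formulation.

```python
import math
import random

def sig(x):
    """Logistic sigmoid mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso_step(x, v, b, l, w=0.72, phi1=1.49, phi2=1.49, rng=random):
    """One binary-PSO update of a single particle: the velocity stays
    continuous; each component of the new position is set to 1 with
    probability sig(v_j) and to 0 otherwise."""
    new_v, new_x = [], []
    for j in range(len(x)):
        u1, u2 = rng.random(), rng.random()
        vj = w * v[j] + phi1 * u1 * (b[j] - x[j]) + phi2 * u2 * (l[j] - x[j])
        new_v.append(vj)
        new_x.append(1 if rng.random() < sig(vj) else 0)  # r < sig(v) -> bit 1
    return new_x, new_v
```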
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
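A single sampling step of this rule can be sketched as follows; `bare_bones_step` is a hypothetical helper operating on one particle, given its personal best and neighborhood best componentwise.

```python
import random

def bare_bones_step(b_i, l_i, rng=random):
    """Bare-bones PSO position update: sample each component from a normal
    distribution centered midway between the personal best b_i and the
    neighborhood best l_i, with standard deviation |b_ij - l_ij|."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

Note that when b_i and l_i coincide the standard deviation is zero and the particle stays put, which is one way the sampling-based update behaves differently from the velocity-based one.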
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
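The FIPS update for one particle can be sketched in Python as below. The quality weights W(b_j) are passed in precomputed, and the value phi = 4.1 is an illustrative assumption for the example.

```python
import random

def fips_velocity(v_i, x_i, bests, weights, w=0.72, phi=4.1, rng=random):
    """FIPS velocity update: every neighbor's personal best b_j contributes,
    scaled by its quality weight W(b_j) in [0, 1], a fresh uniform random
    factor, and phi divided by the neighborhood size."""
    n = len(bests)
    new_v = []
    for j in range(len(x_i)):
        acc = 0.0
        for b, wt in zip(bests, weights):
            acc += wt * rng.random() * (b[j] - x_i[j])
        new_v.append(w * v_i[j] + (phi / n) * acc)
    return new_v
```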
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of both single- and multi-objective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4899</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4899"/>
		<updated>2008-10-02T15:55:58Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt; (a particle may belong to its own neighborhood). In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random generation of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose main-diagonal entries are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) &amp;lt; f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
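As an illustration, the pseudocode above can be sketched in Python. This is a minimal sketch only: the fully connected topology, the objective function, and the parameter defaults (w = 0.72, phi1 = phi2 = 1.49) are illustrative assumptions, not values prescribed by the article.

```python
import random

def pso(f, dim, n_particles=20, w=0.72, phi1=1.49, phi2=1.49,
        bounds=(-5.0, 5.0), iterations=100):
    """Minimal PSO sketch with a fully connected topology.

    Parameter defaults are illustrative, not prescribed by the article.
    """
    lo, hi = bounds
    # Initialization: positions and velocities drawn at random in the domain
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(lo - hi, hi - lo) for _ in range(dim)] for _ in range(n_particles)]
    b = [xi[:] for xi in x]          # personal best positions b_i
    fb = [f(xi) for xi in x]         # their objective values
    for _ in range(iterations):
        # With a fully connected topology, l_i is the swarm best for every i
        g = min(range(n_particles), key=fb.__getitem__)
        # Velocity and position update loop
        for i in range(n_particles):
            for j in range(dim):
                u1, u2 = random.random(), random.random()  # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Solution update loop: refresh personal bests
        for i in range(n_particles):
            fx = f(x[i])
            if fx < fb[i]:
                b[i], fb[i] = x[i][:], fx
    g = min(range(n_particles), key=fb.__getitem__)
    return b[g], fb[g]
```

For example, minimizing the 2-dimensional sphere function `f(x) = x_1^2 + x_2^2` with this sketch typically drives the best objective value close to zero within 100 iterations.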
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
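The binary position-update rule above can be sketched as follows (Python; the helper names are illustrative, not from the original):

```python
import math
import random

def sigmoid(v):
    """sig(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-v))

def update_binary_position(velocity):
    """Binary PSO position update: bit j is set to 1 with probability
    sig(v_ij), where r is drawn uniformly at random from [0, 1)."""
    return [1 if random.random() < sigmoid(vj) else 0 for vj in velocity]
```

A strongly positive velocity component makes the corresponding bit almost certainly 1, and a strongly negative one makes it almost certainly 0.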
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are replaced by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
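A minimal sketch of the bare-bones sampling rule (Python; note that `random.gauss` takes the standard deviation, which here is `|b_ij - l_ij|`):

```python
import random

def bare_bones_position(b_i, l_i):
    """Bare-bones PSO position update: sample each coordinate from
    N(mu, sigma) with mu = (b_ij + l_ij)/2 and sigma = |b_ij - l_ij|."""
    return [random.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]
```

When the personal best and the neighborhood best coincide, the standard deviation is zero and the particle stays exactly at that point.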
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called the ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
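A sketch of the FIPS velocity update (Python). The weighting function W is a placeholder returning 1 for every neighbor, which reduces the rule to the unweighted FIPS variant; the names and parameter defaults are illustrative assumptions.

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.72, phi=4.1,
                  weight=lambda b: 1.0):
    """FIPS velocity update: every neighbor's personal best contributes,
    scaled by phi/|N_i|, a fresh random factor per dimension, and a
    quality weight W (here a placeholder returning 1 for all neighbors)."""
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        social = sum(weight(b) * random.random() * (b[j] - x_i[j])
                     for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * social)
    return new_v
```

If a particle sits exactly at all its neighbors' personal bests and has zero velocity, the update leaves it at rest, as expected from the formula.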
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single-objective and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4898</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4898"/>
		<updated>2008-10-02T15:53:29Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* The algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt; (a particle can belong to its own neighborhood). In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
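As a small illustration of a population topology (Python; a hypothetical helper, not part of the original), the ring topology assigns each particle its two adjacent particles as neighbors:

```python
def ring_neighborhoods(k):
    """Ring topology: particle i's neighbors are the two adjacent
    particles, with indices taken modulo k so the ring closes."""
    return [{(i - 1) % k, (i + 1) % k} for i in range(k)]
```

For a swarm of five particles, particle 0 has neighbors 4 and 1, and every particle has exactly two neighbors; a fully connected or von Neumann topology would be built analogously by changing the neighbor sets.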
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random generation of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices in which the entries in the main diagonal are distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; uniformly at random. At every iteration, these matrices are regenerated, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are replaced by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called the ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single-objective and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4897</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4897"/>
		<updated>2008-10-02T15:52:41Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Preliminaries */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'' move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated with it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited up to time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt; (a particle may belong to its own neighborhood). In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''.&lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are random numbers uniformly distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
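The pseudocode above can be sketched in Python, assuming a fully connected topology so that every particle's best neighbor is the global best; the function name, the parameter values, and the use of NumPy are illustrative assumptions, not part of the original presentation.&lt;br /&gt;

```python
import numpy as np

def pso(f, lower, upper, k=30, w=0.7, phi1=1.5, phi2=1.5, iters=200, seed=0):
    """Minimal standard PSO sketch with a fully connected topology.

    The diagonal random matrices U1 and U2 are represented by arrays of
    uniform samples in [0, 1), one per coordinate, regenerated every step.
    """
    rng = np.random.default_rng(seed)
    n = lower.shape[0]
    x = rng.uniform(lower, upper, size=(k, n))                 # positions
    v = rng.uniform(-(upper - lower), upper - lower, (k, n))   # velocities
    b = x.copy()                                               # personal bests
    fb = np.array([f(p) for p in b])
    for _ in range(iters):
        g = b[np.argmin(fb)]               # best neighbor = global best here
        u1 = rng.uniform(0.0, 1.0, (k, n))
        u2 = rng.uniform(0.0, 1.0, (k, n))
        v = w * v + phi1 * u1 * (b - x) + phi2 * u2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fb >= fx                  # personal-best update
        b[better] = x[better]
        fb[better] = fx[better]
    return b[np.argmin(fb)]
```

The values w=0.7 and phi1=phi2=1.5 are one common stable choice in the sense of Clerc and Kennedy (2002); on a smooth test function such as the sphere function this sketch converges toward the minimizer.&lt;br /&gt;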
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
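As an illustration, the binary position rule can be written as a small Python function; the helper name and the use of NumPy are assumptions, not part of the original text.&lt;br /&gt;

```python
import numpy as np

def binary_position_update(v_next, rng):
    """Binary PSO position rule: bit j becomes 1 with probability sig(v_j).

    v_next is the already-updated velocity vector; velocities themselves
    are updated exactly as in the standard algorithm.
    """
    sig = 1.0 / (1.0 + np.exp(-v_next))           # sigmoid maps velocity into (0, 1)
    r = rng.uniform(0.0, 1.0, size=v_next.shape)  # one draw per bit
    return (sig > r).astype(int)                  # same event as r being below sig
```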
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
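A sketch of this sampling step in Python follows; the function name and the use of NumPy are illustrative assumptions.&lt;br /&gt;

```python
import numpy as np

def bare_bones_update(b_i, l_i, rng):
    """Bare-bones PSO position update: sample each coordinate from a
    Gaussian centered midway between the personal best b_i and the
    neighborhood best l_i, with spread equal to their distance."""
    mu = (b_i + l_i) / 2.0
    sigma = np.abs(b_i - l_i)
    return rng.normal(mu, sigma)
```

Note that when the two attractors coincide the spread is zero, so the particle stays put; this is the intended behavior of the sampling rule.&lt;br /&gt;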
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
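A sketch of the FIPS velocity update in Python, taking the weighting function &amp;lt;math&amp;gt;\mathcal{W}&amp;lt;/math&amp;gt; to be constant (equal to 1), a common special case; the names and parameter values are illustrative assumptions.&lt;br /&gt;

```python
import numpy as np

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1, rng=None):
    """Fully informed velocity update: every neighbor's personal best
    contributes, scaled by phi divided by the neighborhood size.
    The quality weight W is fixed to 1 here (a simplification)."""
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros_like(v_i)
    for b_j in neighbor_bests:
        u_j = rng.uniform(0.0, 1.0, size=v_i.shape)  # fresh random diagonal per neighbor
        acc += u_j * (b_j - x_i)
    return w * v_i + (phi / len(neighbor_bests)) * acc
```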
&lt;br /&gt;
== Applications of PSO and Current Trends ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
** The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
** Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO).&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4896</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4896"/>
		<updated>2008-10-02T15:49:34Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'' move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'' &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are random numbers uniformly distributed in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
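The pseudocode above translates into a short, runnable sketch. The following Python version assumes a fully connected topology (so each particle's best neighbor is the swarm-wide best personal best) and uses the sphere function as a hypothetical objective; the function name and parameter values are illustrative, not prescribed by the article.

```python
import random

def pso_minimize(f, dim, bounds, k=20, w=0.7, phi1=1.5, phi2=1.5,
                 iters=200, seed=0):
    """Standard PSO sketch with a fully connected topology."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: positions and velocities drawn at random.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[rng.uniform(-(hi - lo), hi - lo) for _ in range(dim)] for _ in range(k)]
    b = [xi[:] for xi in x]                 # personal bests b_i
    fb = [f(xi) for xi in x]                # cached values f(b_i)
    for _ in range(iters):
        # Under full connectivity, l_i is the best personal best overall.
        g = min(range(k), key=lambda i: fb[i])
        # Velocity and position update loop.
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()  # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
        # Solution update loop.
        for i in range(k):
            fx = f(x[i])
            if fx <= fb[i]:
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]

# Hypothetical test objective: the sphere function, minimum 0 at the origin.
sphere = lambda x: sum(t * t for t in x)
best, val = pso_minimize(sphere, dim=5, bounds=(-10.0, 10.0))
```

With the illustrative settings above (w = 0.7, phi1 = phi2 = 1.5), the parameters lie in the stability region described by Clerc and Kennedy (2002), so the swarm contracts toward the optimum.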
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
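The binary position-update rule can be sketched as follows; the function names and the sample velocity values are illustrative, not from the original paper.

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_next, rng):
    """Binary PSO position update: component j becomes 1 with
    probability sig(v_j), and 0 otherwise."""
    return [1 if rng.random() < sig(vj) else 0 for vj in v_next]

# Strongly negative, neutral, and strongly positive velocity components:
# the corresponding bits are almost surely 0, fair, and almost surely 1.
bits = binary_position_update([-10.0, 0.0, 10.0], random.Random(1))
```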
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
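A minimal sketch of the bare-bones sampling step (names are illustrative). Note the consequence of the update rule: in any dimension where the personal best and the neighborhood best coincide, the standard deviation is zero and the value is reproduced exactly.

```python
import random

def barebones_update(b_i, l_i, rng):
    """Bare-bones PSO position update: sample dimension j from a normal
    distribution with mean (b_ij + l_ij)/2 and std. dev. |b_ij - l_ij|."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(b_i, l_i)]

# In the second dimension the personal and neighborhood bests agree,
# so the standard deviation is zero and 4.0 is copied unchanged.
x_next = barebones_update([1.0, 4.0], [3.0, 4.0], random.Random(0))
```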
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called the ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
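The FIPS velocity-update rule can be sketched as follows, assuming the simplification &amp;lt;math&amp;gt;\mathcal{W} \equiv 1&amp;lt;/math&amp;gt; (uniform weighting of all neighbors); function names and parameter values are illustrative.

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=3.0, rng=None):
    """FIPS velocity update with uniform weighting (W == 1): every
    neighbor's personal best contributes, scaled by phi / |N_i|."""
    rng = rng or random
    n = len(neighbor_bests)
    v_next = []
    for j, vj in enumerate(v_i):
        # Sum of randomly weighted pulls toward each neighbor's best.
        pull = sum(rng.random() * (b[j] - x_i[j]) for b in neighbor_bests)
        v_next.append(w * vj + (phi / n) * pull)
    return v_next

# A resting particle at the origin with two neighbors whose bests lie
# on the positive axis is pulled in the positive direction.
v_new = fips_velocity([0.0], [0.0], [[1.0], [3.0]], rng=random.Random(0))
```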
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve single- and multiobjective optimization problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4895</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4895"/>
		<updated>2008-10-02T15:47:46Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). &lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
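As an illustration of a population topology, the ring case shown in the figure can be encoded as an adjacency map; the helper name and index convention are illustrative, not from the article.

```python
def ring_topology(k):
    """Ring topology: particle i's neighborhood consists of the two
    particles adjacent to it on a cycle of k particles."""
    return {i: [(i - 1) % k, (i + 1) % k] for i in range(k)}

# For a swarm of 5 particles, each neighborhood has exactly two members,
# matching the rightmost topology in the figure.
neighborhoods = ring_topology(5)
```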
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called the ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve single- and multiobjective optimization problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently being pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
**  Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation (GECCO) series of conferences.&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4894</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4894"/>
		<updated>2008-10-02T15:36:45Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* External Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
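The pseudocode above can be sketched in a few dozen lines of plain Python. This is an illustrative sketch, not a reference implementation: the swarm size, iteration count, parameter values, search bounds, random seed, and the sphere test function are all arbitrary choices, and a fully connected topology is assumed, so every particle's neighborhood best is the global best.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, phi1=1.5, phi2=1.5,
        lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a fully connected topology."""
    rng = random.Random(42)
    # Initialization: random positions, zero velocities, personal bests = positions.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    b = [xi[:] for xi in x]
    for _ in range(iters):
        g = min(b, key=f)  # best neighbor; equals the global best in this topology
        for i in range(n_particles):
            for j in range(dim):
                # Scalar u1, u2 play the role of the random diagonal matrices.
                u1, u2 = rng.random(), rng.random()
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
            # Solution update: keep the better of the new position and the personal best.
            if f(x[i]) <= f(b[i]):
                b[i] = x[i][:]
    return min(b, key=f)

def sphere(p):
    return sum(t * t for t in p)

best = pso(sphere, dim=2)
```

With these (unoptimized) settings the swarm reliably concentrates near the minimizer of the sphere function at the origin.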
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
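The binary position update can be sketched in Python as follows; the velocity values and seed below are arbitrary examples chosen for illustration.

```python
import math
import random

def sig(x):
    """Logistic function mapping a velocity component to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_row, rng):
    """Set component j to 1 with probability sig(v_j), and to 0 otherwise."""
    return [1 if rng.random() < sig(vj) else 0 for vj in v_row]

rng = random.Random(0)
# Strongly negative velocities make 1-bits unlikely; strongly positive ones, likely.
bits = binary_position_update([-4.0, 0.0, 4.0], rng)
```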
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
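The sampling step can be sketched with the standard library's Gaussian generator; the particle vectors and seed below are arbitrary examples.

```python
import random

def bare_bones_step(personal_best, neighborhood_best, rng):
    """Sample each coordinate from N(mean of the two bests, |their difference|)."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj))
            for bj, lj in zip(personal_best, neighborhood_best)]

rng = random.Random(1)
# In a dimension where the two bests agree, the standard deviation is zero,
# so the sampled coordinate equals that shared value.
x_next = bare_bones_step([1.0, 2.0], [3.0, 2.0], rng)
```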
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
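A minimal Python sketch of the FIPS velocity update, assuming the simplest weighting, a constant &amp;lt;math&amp;gt;\mathcal{W} \equiv 1&amp;lt;/math&amp;gt; (quality-based weightings would replace the implicit factor of 1 in the sum):

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7, phi=4.1, rng=None):
    """FIPS update: average random attractions toward every neighbor's best.

    Assumes a uniform weighting function (W(b) = 1 for all personal bests).
    """
    rng = rng or random.Random(3)
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(x_i)):
        # Each neighbor contributes a randomly scaled pull toward its best.
        attraction = sum(rng.random() * (b[j] - x_i[j]) for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * attraction)
    return new_v

# When every neighbor's best coincides with the particle's position, all
# attractions vanish and only the inertia term remains.
v = fips_velocity([1.0, -2.0], [0.0, 0.0], [[0.0, 0.0], [0.0, 0.0]])
```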
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single-objective and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training, and it was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since then, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm - explosion, stability, and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Papers on PSO are published regularly in many journals and conferences:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals also publish articles about PSO. These include the IEEE Transactions series, Natural Computing, Structural and Multidisciplinary Optimization, Soft Computing and others.&lt;br /&gt;
** [http://iridia.ulb.ac.be/~ants ''ANTS - From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms'']. This biennial series of workshops, held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields. &lt;br /&gt;
**The IEEE Swarm Intelligence Symposia, started in 2003.&lt;br /&gt;
** Special sessions or special tracks on PSO are organized in many conferences. Examples are the IEEE Congress on Evolutionary Computation (CEC) and the Genetic and Evolutionary Computation Conference (GECCO).&lt;br /&gt;
** Papers on PSO are also published in the proceedings of many other conferences such as Parallel Problem Solving from Nature conferences, the European Workshops on the Applications of Evolutionary Computation and many others.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4893</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4893"/>
		<updated>2008-10-02T13:21:12Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Applications of PSO and Current Trends */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
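As an illustrative sketch (not code from the original publication), the binary position-update rule above can be written in Python; the helper names `sig` and `update_binary_position` are hypothetical:

```python
import math
import random

def sig(x):
    # Logistic function mapping a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def update_binary_position(velocity, rng=random.random):
    # Each component of the new position becomes 1 with probability sig(v_ij),
    # where r = rng() is drawn uniformly at random from [0, 1)
    return [1 if rng() < sig(v) else 0 for v in velocity]
```

A strongly positive velocity component makes the corresponding bit almost certainly 1, and a strongly negative one makes it almost certainly 0.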
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
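The sampling step above can be sketched in a few lines of Python. This is a minimal illustration of the per-dimension rule, with a hypothetical function name:

```python
import random

def bare_bones_step(b_ij, l_ij, rng=random):
    # Sample the new position component from N(mu, sigma) with
    # mu = (b_ij + l_ij) / 2 and sigma = |b_ij - l_ij|
    mu = (b_ij + l_ij) / 2.0
    sigma = abs(b_ij - l_ij)
    return rng.gauss(mu, sigma)
```

Note that when the personal best and the neighborhood best coincide in a dimension, the standard deviation is zero and the particle stops moving in that dimension.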
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
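A minimal sketch of the fully informed velocity update, assuming the constant weighting function &amp;lt;math&amp;gt;\mathcal{W} \equiv 1&amp;lt;/math&amp;gt; (one of the choices studied by Mendes et al. 2004); the function name and its default parameter values are illustrative, not from the article:

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.7298, phi=4.1,
                  weight=lambda b: 1.0, rng=random.random):
    # Every neighbor's personal best contributes to the new velocity,
    # each term scaled by phi/|N_i|, the quality weight W(b_j), and a
    # fresh uniform random factor per dimension (the diagonal of U_j).
    n = len(v_i)
    k = len(neighbor_bests)
    new_v = [w * v_i[d] for d in range(n)]
    for b_j in neighbor_bests:
        w_j = weight(b_j)
        for d in range(n):
            new_v[d] += (phi / k) * w_j * rng() * (b_j[d] - x_i[d])
    return new_v
```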
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to more and/or different kinds of problems&lt;br /&gt;
*Parameter selection &lt;br /&gt;
*Comparisons between PSO variants and other algorithms&lt;br /&gt;
*New variants&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Many journals and conferences publish papers on PSO:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals where papers on PSO regularly appear are IEEE Transactions on Evolutionary Computation, etc.&lt;br /&gt;
** Conferences where PSO research regularly appears include GECCO, ANTS, IEEE SIS, and Evo*&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4892</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4892"/>
		<updated>2008-10-02T13:18:44Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'' move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. Vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
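The pseudocode above can be condensed into a runnable Python sketch. This is an illustrative implementation, not the reference one: it assumes a fully connected topology (every particle's best neighbor is the swarm's global best) and uses the constriction-derived parameter values of Clerc and Kennedy (2002); the function name `pso` and its arguments are hypothetical:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200,
        w=0.7298, phi1=1.49618, phi2=1.49618, seed=42):
    # Minimal standard PSO minimizing f over [lo, hi]^dim with a
    # fully connected population topology.
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    b = [xi[:] for xi in x]                            # personal best positions
    fb = [f(xi) for xi in x]                           # personal best values
    g = min(range(n_particles), key=lambda i: fb[i])   # global best index
    for _ in range(iters):
        # Velocity and position update loop
        for i in range(n_particles):
            for d in range(dim):
                u1, u2 = rng.random(), rng.random()    # diagonal entries of U1, U2
                v[i][d] = (w * v[i][d]
                           + phi1 * u1 * (b[i][d] - x[i][d])
                           + phi2 * u2 * (b[g][d] - x[i][d]))
                x[i][d] += v[i][d]
        # Solution update loop
        for i in range(n_particles):
            fx = f(x[i])
            if fx <= fb[i]:
                b[i], fb[i] = x[i][:], fx
        g = min(range(n_particles), key=lambda i: fb[i])
    return b[g], fb[g]
```

On a smooth unimodal function such as the sphere function, this sketch converges quickly toward the optimum.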
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t} ,\sigma^{\,t}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO and Current Trends==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
A number of research directions are currently pursued, including:&lt;br /&gt;
*Theoretical aspects (particle behavior, stagnation)&lt;br /&gt;
*Matching algorithms (or algorithmic components) to problems&lt;br /&gt;
*Application to different kinds of problems (dynamic, stochastic, combinatorial)&lt;br /&gt;
*Parameter selection (how many particles, which topology?)&lt;br /&gt;
*Identification of &amp;quot;state-of-the-art&amp;quot; PSO algorithms (comparisons)&lt;br /&gt;
*New variants (modifications, hybridizations)&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy, and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Many journals and conferences publish papers on PSO:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals where papers on PSO regularly appear are IEEE Transactions on Evolutionary Computation, etc.&lt;br /&gt;
** Conferences where PSO research regularly appears include GECCO, ANTS, IEEE SIS, and Evo*&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4891</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4891"/>
		<updated>2008-10-02T13:16:49Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'' move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
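The neighborhood sets induced by a topology are straightforward to build in code. The following Python sketch is illustrative only; the function name and the choice of excluding a particle from its own neighborhood are our own assumptions.&lt;br /&gt;

```python
# Illustrative sketch: neighborhood index sets for a ring topology over k
# particles, where each particle has exactly two neighbors. Whether a
# particle belongs to its own neighborhood is a modeling choice; here it
# does not.
def ring_neighborhoods(k):
    return [{(i - 1) % k, (i + 1) % k} for i in range(k)]

# With 5 particles, particle 0 is a neighbor of particles 4 and 1.
print(ring_neighborhoods(5)[0])
```

Other topologies (fully connected, von Neumann) differ only in how these index sets are filled in.&lt;br /&gt;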
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. The vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
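The pseudocode above can be sketched in Python as follows. This is an illustration rather than a reference implementation: the fully connected topology (so that the best neighbor coincides with the swarm's overall best), the sphere objective, and all parameter defaults are our own assumptions.&lt;br /&gt;

```python
import random

def pso(f, dim, bounds, k=20, w=0.72, phi1=1.49, phi2=1.49, iters=200, seed=1):
    """Minimal sketch of the standard PSO pseudocode. A fully connected
    topology is assumed, so each particle's best neighbor is the swarm's
    best particle overall. Parameter defaults are illustrative only."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialization: positions and velocities drawn at random.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)]
    v = [[rng.uniform(lo - hi, hi - lo) for _ in range(dim)] for _ in range(k)]
    b = [xi[:] for xi in x]          # personal best positions
    fb = [f(xi) for xi in x]         # objective values of the personal bests
    for _ in range(iters):
        g = min(range(k), key=lambda i: fb[i])   # index of the best neighbor
        for i in range(k):
            for j in range(dim):
                u1, u2 = rng.random(), rng.random()  # diagonal entries of U1, U2
                v[i][j] = (w * v[i][j]
                           + phi1 * u1 * (b[i][j] - x[i][j])
                           + phi2 * u2 * (b[g][j] - x[i][j]))
                x[i][j] += v[i][j]
            fx = f(x[i])
            if fx <= fb[i]:          # solution (personal best) update
                b[i], fb[i] = x[i][:], fx
    g = min(range(k), key=lambda i: fb[i])
    return b[g], fb[g]

def sphere(theta):
    return sum(t * t for t in theta)

best, val = pso(sphere, dim=5, bounds=(-10.0, 10.0))
```

With these (assumed) settings the swarm concentrates near the minimum of the sphere function; with &amp;lt;math&amp;gt;w \geq 1&amp;lt;/math&amp;gt; or overly large acceleration coefficients the velocities can diverge, which is the stability issue analyzed by Clerc and Kennedy (2002).&lt;br /&gt;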
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
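As an illustration, the binary position-update rule can be written compactly in Python (the function names are our own):&lt;br /&gt;

```python
import math
import random

def sig(x):
    """Sigmoid mapping a velocity component to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_position_update(v_row, rng=random):
    """Sketch of the binary PSO rule above: component j of the new position
    becomes 1 with probability sig(v_j), and 0 otherwise."""
    return [1 if rng.random() < sig(vj) else 0 for vj in v_row]
```

A strongly positive velocity component makes the corresponding bit almost certainly 1, a strongly negative one makes it almost certainly 0, and a zero velocity leaves it at a fair coin flip.&lt;br /&gt;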
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
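As an illustration, one bare-bones sampling step can be written as follows (the names are our own; the rule itself follows the equations above):&lt;br /&gt;

```python
import random

def bare_bones_step(b_i, l_i, rng=random):
    """Sketch of the bare-bones update above: each component is drawn from a
    normal distribution centered at the midpoint of the personal best b_i and
    the neighborhood best l_i, with standard deviation |b_ij - l_ij|."""
    return [rng.gauss((bj + lj) / 2.0, abs(bj - lj)) for bj, lj in zip(b_i, l_i)]
```

When the two attractors coincide, the standard deviation collapses to zero and the particle samples that point exactly; a large disagreement between them widens the search.&lt;br /&gt;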
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
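A minimal Python sketch of this velocity-update rule is given below. The uniform weighting (taking the weight function identically equal to 1) corresponds to the plain FIPS variant; the parameter defaults and names are our own assumptions.&lt;br /&gt;

```python
import random

def fips_velocity(v_i, x_i, neighbor_bests, w=0.72, phi=4.1,
                  weight=lambda b: 1.0, rng=random):
    """Sketch of the FIPS rule above. neighbor_bests holds the personal-best
    vectors b_j of all of p_i's neighbors; `weight` plays the role of W and
    is uniform here (quality-based weightings are also possible)."""
    n = len(neighbor_bests)
    new_v = []
    for j in range(len(v_i)):
        # Each neighbor contributes a randomly scaled pull toward its best.
        pull = sum(weight(b) * rng.random() * (b[j] - x_i[j])
                   for b in neighbor_bests)
        new_v.append(w * v_i[j] + (phi / n) * pull)
    return new_v
```

Replacing `weight` with a function of the neighbor's objective value recovers the quality-weighted variants studied by Mendes et al. (2004).&lt;br /&gt;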
&lt;br /&gt;
== Applications of PSO ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully to solve both single- and multi-objective optimization problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more application areas have been explored since then, including telecommunications, control, data mining, design, combinatorial optimization, power systems, and signal processing, among others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
== Current Research Issues ==&lt;br /&gt;
&lt;br /&gt;
Current research in particle swarm optimization is conducted along several lines, including the following:&lt;br /&gt;
&lt;br /&gt;
;Theory: Understanding how particle swarm optimizers work from a theoretical point of view has been the subject of active research in recent years. The first efforts were directed toward understanding the effects of the different parameters on the behavior of the standard particle swarm optimization algorithm (Ozcan99, Clerc02, Trelea03). More recently, there has been particular interest in studying the stochastic properties of the pair of stochastic equations that govern the movement of a particle (Blackwell07, Poli08b, Pena08).&lt;br /&gt;
&lt;br /&gt;
;New variants: This line of research has been the most active since the proposal of the first particle swarm algorithm. New position-update mechanisms are continuously proposed in an effort to design ever better performing particle swarms. Some efforts are directed toward understanding on which classes of problems particular particle swarm algorithms can be top performers (Poli05a, Poli05b, Langdon07). Also of interest is the study of hybridizations between particle swarms and other high-performing optimization algorithms (Lovbjerg01, Naka03). Recently, parallel implementations have been studied (Schutte04, Koh06) and, given the wider availability of parallel computers, more research along this line can be expected.&lt;br /&gt;
&lt;br /&gt;
;Performance evaluation: Given the great number of new variants that are frequently proposed, performance comparisons have always been of interest. Comparisons between the standard particle swarm and other optimization techniques, such as those in (Eberhart98, Hassan05), have drawn many researchers to particle swarm optimization. Comparisons between different particle swarm optimization variants, as well as empirical parameter-setting studies, improve our understanding of the technique and often trigger further empirical and theoretical work (Mendes04, Schutte05, MdeO06).&lt;br /&gt;
&lt;br /&gt;
;Applications: More applications of the particle swarm optimization algorithm are expected in the future. Much of the work done on other aspects of the paradigm should help in solving practically relevant problems in many domains.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Many journals and conferences publish papers on PSO:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals where papers on PSO regularly appear include IEEE Transactions on Evolutionary Computation.&lt;br /&gt;
** Conferences that regularly feature PSO papers include GECCO, ANTS, IEEE SIS, and Evo*.&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4890</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4890"/>
		<updated>2008-10-02T13:02:11Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Applications of PSO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy and Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is a neighbor of two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal elements are distributed uniformly at random in the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. The vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the algorithm is guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other approaches to tackle discrete problems include transforming the continuous domain into discrete sets of intervals (Fukuyama et al. 1999), rounding off the components of the particles' position vectors (Laskari et al. 2002), and redefining the mathematical operators used in the velocity- and position-update rules to suit a chosen problem representation (Clerc 2004).&lt;br /&gt;
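The binary position-update rule above can be written compactly. The following sketch assumes NumPy; the helper names sig and binary_position_update are hypothetical, not from the original algorithm description.

```python
import numpy as np

def sig(v):
    # Logistic function: maps a real-valued velocity to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-v))

def binary_position_update(v_next, rng):
    # Component j of the new position becomes 1 with probability sig(v_next[j]).
    r = rng.uniform(0.0, 1.0, size=np.shape(v_next))  # r drawn uniformly from [0, 1)
    return np.where(sig(np.asarray(v_next)) > r, 1, 0)
```

With strongly positive velocity components the corresponding bits are almost surely set to 1, and with strongly negative ones to 0.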
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although a normal distribution was originally used in the bare bones model, other probability density functions can make it more competitive (Richer and Blackwell 2006).&lt;br /&gt;
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
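Under the same notation, one FIPS velocity update can be sketched as follows. The equal-quality weighting (W returning 1 for every neighbor when no weight function is supplied), the parameter defaults, and the function name fips_velocity are assumptions made for illustration.

```python
import numpy as np

def fips_velocity(v_i, x_i, neigh_bests, w=0.7, phi=4.1, weight=None, rng=None):
    # One fully-informed velocity update: every neighbor's personal best b_j
    # contributes, scaled by a quality weight W(b_j) and a random diagonal U_j.
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros_like(v_i, dtype=float)
    for b_j in neigh_bests:
        wq = 1.0 if weight is None else weight(b_j)    # W(b_j), assumed in [0, 1]
        u = rng.uniform(0.0, 1.0, size=np.shape(v_i))  # diagonal entries of U_j
        acc = acc + wq * u * (b_j - x_i)
    return w * v_i + (phi / len(neigh_bests)) * acc
```

When all neighborhood bests coincide with the particle's current position, the attraction terms vanish and only the inertia term w * v_i remains.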
&lt;br /&gt;
== Applications of PSO ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
== Current Research Issues ==&lt;br /&gt;
&lt;br /&gt;
Current research in particle swarm optimization proceeds along several lines, including: &lt;br /&gt;
&lt;br /&gt;
'''Theory.''' Understanding how particle swarm optimizers work from a theoretical point of view has been the subject of active research in recent years. The first efforts were directed toward understanding the effects of the different parameters on the behavior of the standard particle swarm optimization algorithm (Ozcan99, Clerc02, Trelea03). More recently, particular interest has been devoted to the stochastic properties of the pair of stochastic equations that govern the movement of a particle (Blackwell07, Poli08b, Pena08).&lt;br /&gt;
&lt;br /&gt;
'''New variants.''' This line of research has been the most active since the proposal of the first particle swarm algorithm. New particle position-update mechanisms are continually proposed in an effort to design ever better performing particle swarms. Some efforts are directed toward understanding on which classes of problems particular particle swarm algorithms can be top performers (Poli05a, Poli05b, Langdon07). Also of interest is the study of hybridizations between particle swarms and other high-performing optimization algorithms (Lovbjerg01, Naka03). Recently, some parallel implementations have been studied (Schutte04, Koh06), and given the wider availability of parallel computers, more research along this line can be expected.&lt;br /&gt;
&lt;br /&gt;
'''Performance evaluation.''' Because new variants are frequently proposed, performance comparisons have always been of interest. Comparisons between the standard particle swarm and other optimization techniques, like the ones in (Eberhart98, Hassan05), have drawn many researchers to particle swarm optimization. Comparisons between different particle swarm optimization variants, as well as empirical parameter-setting studies, improve our understanding of the technique and often trigger further empirical and theoretical work (Mendes04, Schutte05, MdeO06).&lt;br /&gt;
&lt;br /&gt;
'''Applications.''' More applications of the particle swarm optimization algorithm are expected in the future. Much of the work done on other aspects of the paradigm will hopefully allow us to solve practically relevant problems in many domains.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. Discrete particle swarm optimization, illustrated by the traveling salesman problem. In ''New Optimization Techniques in Engineering'', pages 219-239. Springer, Berlin, Germany, 2004.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
Y. Fukuyama, S. Takayama, Y. Nakanishi, and H. Yoshida. A particle swarm optimization for reactive power and voltage control in electric power systems. In ''Proceedings of the Genetic and Evolutionary Computation Conference'', pages 1523-1528, Morgan Kaufmann, San Francisco, CA, 1999.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
E. C. Laskari, K. E. Parsopoulos, and M. N. Vrahatis. Particle swarm optimization for integer programming. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 1582-1587, IEEE Press, Piscataway, NJ, 2002.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems-a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
T. J. Richer and T. M. Blackwell. The Lévy Particle Swarm. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 808-815, IEEE Press, Piscataway, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Many journals and conferences publish papers on PSO:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Other journals where papers on PSO regularly appear include IEEE Transactions on Evolutionary Computation.&lt;br /&gt;
** Conferences with a strong PSO presence include GECCO, ANTS, IEEE SIS, and Evo*.&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
	<entry>
		<id>https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4889</id>
		<title>Particle Swarm Optimization - Scholarpedia Draft</title>
		<link rel="alternate" type="text/html" href="https://iridia.ulb.ac.be/w/index.php?title=Particle_Swarm_Optimization_-_Scholarpedia_Draft&amp;diff=4889"/>
		<updated>2008-10-02T13:01:05Z</updated>

		<summary type="html">&lt;p&gt;Mmontes: /* Bare bones PSO */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Particle swarm optimization&amp;lt;/strong&amp;gt; (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization problems. &lt;br /&gt;
&lt;br /&gt;
In particle swarm optimization, simple software agents, called ''particles'', move in the solution space of an optimization problem. The position of a particle represents a candidate solution to the optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to rules originally inspired by behavioral models of bird flocking. &lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization belongs to the class of [[swarm intelligence]] techniques that are used to solve optimization problems. &lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization was introduced by Kennedy and Eberhart (1995). It has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social psychology research. &lt;br /&gt;
&lt;br /&gt;
Within the field of computer graphics, the first antecedents of particle swarm optimization can be traced back to the work of Reeves (1983), who proposed particle systems to model objects that are dynamic and cannot be easily represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models, particles are independent of each other and their movement is governed by a set of rules. Some years later, Reynolds (1987) used a particle system to simulate the collective behavior of a flock of birds. In a similar kind of simulation, Heppner and Grenander (1990) included a &amp;quot;roost&amp;quot; that was attractive to the simulated birds. Both models inspired the set of rules that were later used in the original particle swarm optimization algorithm.&lt;br /&gt;
&lt;br /&gt;
Social psychology research was another source of inspiration in the development of the first particle swarm optimization algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen as a model of human social behavior in which individuals adjust their beliefs and attitudes to conform with those of their peers (Kennedy &amp;amp; Eberhart 1995). &lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--The name ''particle swarm'' was chosen because the collective behavior of the particles adheres to the principles described by Millonas (1994).--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standard PSO algorithm ==&lt;br /&gt;
&lt;br /&gt;
=== Preliminaries ===&lt;br /&gt;
The problem of minimizing &amp;lt;ref name=&amp;quot;minimization&amp;quot;&amp;gt;Without loss of generality, the presentation considers only minimization problems.&amp;lt;/ref&amp;gt; &lt;br /&gt;
the function &amp;lt;math&amp;gt;f: \Theta \to \mathbb{R}&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\Theta \subseteq \mathbb{R}^n&amp;lt;/math&amp;gt; can be stated as finding the set&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^* = \underset{\vec{\theta} \in \Theta}{\operatorname{arg\,min}} \, f(\vec{\theta}) = \{ \vec{\theta}^* \in \Theta \colon f(\vec{\theta}^*) \leq f(\vec{\theta}) \,\,\,\,\,\,\forall \vec{\theta} \in \Theta\}\,,&amp;lt;/math&amp;gt; &lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{\theta}&amp;lt;/math&amp;gt; is an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-dimensional vector that belongs to the set of feasible solutions &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt; (also called solution space). The elements of the set &amp;lt;math&amp;gt;\Theta^*&amp;lt;/math&amp;gt; are equivalent with respect to the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Image:Topologies.png|thumb|500px|right|Example population topologies. The leftmost picture depicts a fully connected topology, that is, &amp;lt;math&amp;gt;\mathcal{N}_i = \mathcal{P} \setminus \{p_i\}\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The picture in the center depicts a so-called von Neumann topology, in which &amp;lt;math&amp;gt;|\mathcal{N}_i| = 4\,\,\forall p_i \in \mathcal{P}&amp;lt;/math&amp;gt;. The rightmost picture depicts a ring topology in which each particle is neighbor to two other particles.]]&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;\mathcal{P} = \{p_{1},p_{2},\ldots,p_{k}\}&amp;lt;/math&amp;gt; be the population of particles (also referred to as ''swarm''). &lt;br /&gt;
At any time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has a position &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and a velocity &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; associated to it. The best position that particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; has ever visited until time step &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; is represented by vector &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt; (also known as a particle's ''personal best''). Moreover, a particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt; receives information from its ''neighborhood'', which is defined as the set &amp;lt;math&amp;gt;\mathcal{N}_i \subseteq \mathcal{P}&amp;lt;/math&amp;gt;. Note that a particle can belong to its own neighborhood. In the standard particle swarm optimization algorithm, the particles' neighborhood relations are commonly represented as a graph &amp;lt;math&amp;gt;G=\{V,E\}&amp;lt;/math&amp;gt;, where each vertex in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; corresponds to a particle in the swarm and each edge in &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as the swarm's ''population topology''. &lt;br /&gt;
&lt;br /&gt;
=== The algorithm ===&lt;br /&gt;
The algorithm starts with the random initialization of the particles' positions and velocities within an initialization space &lt;br /&gt;
&amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;. During the main loop of the algorithm, the particles' velocities and positions &lt;br /&gt;
are iteratively updated until a stopping criterion is met. &lt;br /&gt;
&lt;br /&gt;
The update rules are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i) \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i = \vec{x}^{\,t}_i +\vec{v}^{\,t+1}_i \,,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called ''inertia weight'' (Shi and Eberhart 1999), &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are two parameters called ''acceleration coefficients'', and &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; are two &amp;lt;math&amp;gt;n \times n&amp;lt;/math&amp;gt; diagonal matrices whose diagonal entries are drawn uniformly at random from the interval &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt;. These matrices are regenerated at every iteration, that is, &amp;lt;math&amp;gt;\vec{U}^{\,t+1}_{1,2} \neq \vec{U}^{\,t}_{1,2}&amp;lt;/math&amp;gt;. The vector &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; is the best position ever found by any particle in the neighborhood of particle &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;, that is, &amp;lt;math&amp;gt;f(\vec{l}^{\,t}_i) \leq f(\vec{b}^{\,t}_j) \,\,\, \forall p_j \in \mathcal{N}_i&amp;lt;/math&amp;gt;. If the values of &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt; are properly chosen, the particles' trajectories are guaranteed to be stable (Clerc and Kennedy 2002).&lt;br /&gt;
&lt;br /&gt;
A pseudocode version of the standard PSO algorithm is shown below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
 :'''Inputs''' ''Objective function &amp;lt;math&amp;gt;f:\Theta \to \mathbb{R}&amp;lt;/math&amp;gt;, the initialization domain &amp;lt;math&amp;gt;\Theta^\prime \subseteq \Theta&amp;lt;/math&amp;gt;, the set of particles &amp;lt;math&amp;gt;\mathcal{P} \colon |\mathcal{P}| = k&amp;lt;/math&amp;gt;,'' &lt;br /&gt;
 ''the parameters &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\varphi_1&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\varphi_2&amp;lt;/math&amp;gt;, and the stopping criterion &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;''&lt;br /&gt;
 :'''Output''' ''Best solution found''&lt;br /&gt;
   &lt;br /&gt;
  // Initialization&lt;br /&gt;
  Set t := 0&lt;br /&gt;
  for i := 1 to k do&lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\mathcal{N}_i&amp;lt;/math&amp;gt; to a subset of &amp;lt;math&amp;gt;\mathcal{P}&amp;lt;/math&amp;gt; according to the desired topology &lt;br /&gt;
     Initialize &amp;lt;math&amp;gt;\vec{x}^{\,t}_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{v}^{\,t}_i&amp;lt;/math&amp;gt; randomly within &amp;lt;math&amp;gt;\Theta^\prime&amp;lt;/math&amp;gt;&lt;br /&gt;
     Set &amp;lt;math&amp;gt;\vec{b}^{\,t}_i = \vec{x}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
  end for&lt;br /&gt;
  &lt;br /&gt;
  // Main loop&lt;br /&gt;
  while &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is not satisfied do&lt;br /&gt;
     &lt;br /&gt;
     // Velocity and position update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{l}^{\,t}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;p_i&amp;lt;/math&amp;gt;'s best neighbor according to &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;&lt;br /&gt;
        Generate random matrices &amp;lt;math&amp;gt;\vec{U}^{\,t}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{U}^{\,t}_2&amp;lt;/math&amp;gt; &lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;w\vec{v}^{\,t}_i + \varphi_1\vec{U}^{\,t}_1(\vec{b}^{\,t}_i - \vec{x}^{\,t}_i) + \varphi_2\vec{U}^{\,t}_2(\vec{l}^{\,t}_i - \vec{x}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
        Set &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t}_i + \vec{v}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     // Solution update loop&lt;br /&gt;
     for i := 1 to k do&lt;br /&gt;
        if &amp;lt;math&amp;gt;f(\vec{x}^{\,t+1}_i) \leq f(\vec{b}^{\,t}_i)&amp;lt;/math&amp;gt;&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{x}^{\,t+1}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        else&lt;br /&gt;
            Set &amp;lt;math&amp;gt;\vec{b}^{\,t+1}_i&amp;lt;/math&amp;gt; := &amp;lt;math&amp;gt;\vec{b}^{\,t}_i&amp;lt;/math&amp;gt;&lt;br /&gt;
        end if&lt;br /&gt;
     end for&lt;br /&gt;
     &lt;br /&gt;
     Set t := t + 1&lt;br /&gt;
     &lt;br /&gt;
  end while&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Main PSO variants ==&lt;br /&gt;
&lt;br /&gt;
The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of these changes affect the way the particles' velocity is updated. In the following subsections, we briefly describe some of the most important developments. For a more detailed description of many of the existing particle swarm optimization variants, see (Kennedy and Eberhart 2001, Engelbrecht 2005, Clerc 2006 and Poli et al. 2007).&lt;br /&gt;
&lt;br /&gt;
=== Discrete PSO ===&lt;br /&gt;
&lt;br /&gt;
Most particle swarm optimization algorithms are designed to search in continuous domains. However, there are a number of variants that operate in discrete spaces. The first variant that worked on discrete domains was the binary particle swarm optimization algorithm (Kennedy and Eberhart 1997). In this algorithm, a particle's position is discrete but its velocity is continuous. The &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of a particle's velocity vector is used to compute the probability with which the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th component of the particle's position vector takes a value of 1. Velocities are updated as in the standard PSO algorithm, but positions are updated using the following rule&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	x^{t+1}_{ij} = &lt;br /&gt;
	\begin{cases} &lt;br /&gt;
		1 &amp;amp; \mbox{if } r &amp;lt; sig(v^{t+1}_{ij}),\\&lt;br /&gt;
		0 &amp;amp; \mbox{otherwise,}&lt;br /&gt;
	\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; is a uniformly distributed random number in the range &amp;lt;math&amp;gt;[0,1)\,&amp;lt;/math&amp;gt; and &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
	sig(x) = \frac{1}{1+e^{-x}}\,.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other approaches to tackle discrete problems include transforming the continuous domain into discrete sets of intervals (Fukuyama et al. 1999), rounding off the components of the particles' position vectors (Laskari et al. 2002), and redefining the mathematical operators used in the velocity- and position-update rules to suit a chosen problem representation (Clerc 2004).&lt;br /&gt;
&lt;br /&gt;
=== Bare bones PSO ===&lt;br /&gt;
&lt;br /&gt;
The ''bare-bones particle swarm'' (Kennedy 2003) is a variant of the particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that samples a parametric probability density function. &lt;br /&gt;
&lt;br /&gt;
In the bare bones particle swarm optimization algorithm, a particle's position update rule in the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th dimension is&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
x^{t+1}_{ij} = N\left(\mu^{t}_{ij}, \sigma^{t}_{ij}\right)\,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is a normal distribution with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{array}{ccc}&lt;br /&gt;
\mu^{t}_{ij} &amp;amp;=&amp;amp; \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\&lt;br /&gt;
\sigma^{t}_{ij} &amp;amp; = &amp;amp; |b^{t}_{ij} - l^{t}_{ij}| \,.&lt;br /&gt;
\end{array}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although a normal distribution was originally used in the bare bones model, other probability density functions can make it more competitive (Richer and Blackwell 2006).&lt;br /&gt;
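The sampling rule above is easy to state in code. The sketch below assumes NumPy; the function name bare_bones_step is hypothetical.

```python
import numpy as np

def bare_bones_step(b_i, l_i, rng):
    # New position: each component is drawn from a normal distribution whose
    # mean is the midpoint of the personal best b_i and the neighborhood best
    # l_i, and whose standard deviation is their component-wise distance.
    b_i, l_i = np.asarray(b_i, float), np.asarray(l_i, float)
    mu = (b_i + l_i) / 2.0
    sigma = np.abs(b_i - l_i)
    return rng.normal(mu, sigma)
```

When b_i and l_i coincide, sigma is zero in every dimension and the sampled position equals that common point.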
&lt;br /&gt;
=== Fully informed PSO ===&lt;br /&gt;
&lt;br /&gt;
In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle uses the information provided by all its neighbors in order to update its velocity is called the ''fully informed particle swarm'' (FIPS) (Mendes et al. 2004).&lt;br /&gt;
	&lt;br /&gt;
In the fully informed particle swarm optimization algorithm, the velocity-update rule is &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}^{\,t+1}_i = w\vec{v}^{\,t}_i + \frac{\varphi}{|\mathcal{N}_i|}\sum_{p_j \in \mathcal{N}_i}\mathcal{W}(\vec{b}^{\,t}_j)\vec{U}^{\,t}_j(\vec{b}^{\,t}_j-\vec{x}^{\,t}_i) \,,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;w&amp;lt;/math&amp;gt; is a parameter called the ''inertia weight'', &amp;lt;math&amp;gt;\varphi&amp;lt;/math&amp;gt; is a parameter called ''acceleration coefficient'', and &amp;lt;math&amp;gt;\mathcal{W} \colon \Theta \to [0,1]&amp;lt;/math&amp;gt; is a function that weighs the contribution of a particle's personal best position to the movement of the target particle based on its relative quality.&lt;br /&gt;
&lt;br /&gt;
== Applications of PSO ==&lt;br /&gt;
&lt;br /&gt;
Particle swarm optimization algorithms have been used successfully in the solution of single and multiobjective problems (Reyes-Sierra and Coello Coello 2006). The first practical application of a PSO algorithm was in the field of neural network training and was published together with the algorithm itself (Kennedy and Eberhart 1995). Many more areas of application have been explored ever since, including telecommunications, control, data mining, design, combinatorial optimization, power systems, signal processing, and many others. To date, there are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see (Poli 2008).&lt;br /&gt;
&lt;br /&gt;
== Current Research Issues ==&lt;br /&gt;
&lt;br /&gt;
Current research in particle swarm optimization proceeds along several lines, including: &lt;br /&gt;
&lt;br /&gt;
'''Theory.''' Understanding how particle swarm optimizers work from a theoretical point of view has been the subject of active research in recent years. The first efforts were directed toward understanding the effects of the different parameters on the behavior of the standard particle swarm optimization algorithm (Ozcan99, Clerc02, Trelea03). More recently, particular interest has been devoted to the stochastic properties of the pair of stochastic equations that govern the movement of a particle (Blackwell07, Poli08b, Pena08).&lt;br /&gt;
&lt;br /&gt;
'''New variants.''' This line of research has been the most active since the proposal of the first particle swarm algorithm. New particle position-update mechanisms are continually proposed in an effort to design ever better performing particle swarms. Some efforts are directed toward understanding on which classes of problems particular particle swarm algorithms can be top performers (Poli05a, Poli05b, Langdon07). Also of interest is the study of hybridizations between particle swarms and other high-performing optimization algorithms (Lovbjerg01, Naka03). Recently, some parallel implementations have been studied (Schutte04, Koh06), and given the wider availability of parallel computers, more research along this line can be expected.&lt;br /&gt;
&lt;br /&gt;
'''Performance evaluation.''' Because new variants are frequently proposed, performance comparisons have always been of interest. Comparisons between the standard particle swarm and other optimization techniques, like the ones in (Eberhart98, Hassan05), have drawn many researchers to particle swarm optimization. Comparisons between different particle swarm optimization variants, as well as empirical parameter-setting studies, improve our understanding of the technique and often trigger further empirical and theoretical work (Mendes04, Schutte05, MdeO06).&lt;br /&gt;
&lt;br /&gt;
'''Applications.''' More applications of the particle swarm optimization algorithm are expected in the future. Much of the work done on other aspects of the paradigm will hopefully allow us to solve practically relevant problems in many domains.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
M. Clerc and J. Kennedy. The particle swarm-explosion, stability and convergence in a multidimensional complex space. ''IEEE Transactions on Evolutionary Computation'', 6(1):58-73, 2002.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. Discrete particle swarm optimization, illustrated by the traveling salesman problem. In ''New Optimization Techniques in Engineering'', pages 219-239. Springer, Berlin, Germany, 2004.&lt;br /&gt;
&lt;br /&gt;
M. Clerc. ''Particle Swarm Optimization''. ISTE, London, UK, 2006.&lt;br /&gt;
&lt;br /&gt;
A. P. Engelbrecht. ''Fundamentals of Computational Swarm Intelligence''. John Wiley &amp;amp; Sons, Chichester, UK, 2005.&lt;br /&gt;
&lt;br /&gt;
Y. Fukuyama, S. Takayama, Y. Nakanishi, and H. Yoshida. A particle swarm optimization for reactive power and voltage control in electric power systems. In ''Proceedings of the Genetic and Evolutionary Computation Conference'', pages 1523-1528, Morgan Kaufmann, San Francisco, CA, 1999.&lt;br /&gt;
&lt;br /&gt;
F. Heppner and U. Grenander. A stochastic nonlinear model for coordinated bird flocks. ''The Ubiquity of Chaos''. AAAS Publications, Washington, DC, 1990.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy. Bare bones particle swarms. In ''Proceedings of the IEEE Swarm Intelligence Symposium'', pages 80-87, IEEE Press, Piscataway, NJ, 2003.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. Particle swarm optimization. In ''Proceedings of IEEE International Conference on Neural Networks'', pages 1942-1948, IEEE Press, Piscataway, NJ, 1995.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. A discrete binary version of the particle swarm algorithm. In ''Proceedings of the IEEE International Conference on Systems, Man and Cybernetics'', pages 4104-4108, IEEE Press, Piscataway, NJ, 1997.&lt;br /&gt;
&lt;br /&gt;
J. Kennedy and R. Eberhart. ''Swarm Intelligence''. Morgan Kaufmann, San Francisco, CA, 2001.&lt;br /&gt;
&lt;br /&gt;
E. C. Laskari, K. E. Parsopoulos, and M. N. Vrahatis. Particle swarm optimization for integer programming. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 1582-1587, IEEE Press, Piscataway, NJ, 2002.&lt;br /&gt;
&lt;br /&gt;
R. Mendes, J. Kennedy, and J. Neves. The fully informed particle swarm: simpler, maybe better. ''IEEE Transactions on Evolutionary Computation'', 8(3):204-210, 2004.&lt;br /&gt;
&lt;br /&gt;
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. ''Journal of Artificial Evolution and Applications'', Article ID 685175, 10 pages, 2008.&lt;br /&gt;
&lt;br /&gt;
R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: An overview. ''Swarm Intelligence'', 1(1):33-57, 2007.&lt;br /&gt;
&lt;br /&gt;
W. T. Reeves. Particle systems: a technique for modeling a class of fuzzy objects. ''ACM Transactions on Graphics'', 2(2):91-108, 1983.&lt;br /&gt;
&lt;br /&gt;
M. Reyes-Sierra and C. A. Coello Coello. Multi-objective particle swarm optimizers: A survey of the state-of-the-art. ''International Journal of Computational Intelligence Research'', 2(3):287-308, 2006.&lt;br /&gt;
&lt;br /&gt;
C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. ''ACM Computer Graphics'', 21(4):25-34, 1987.&lt;br /&gt;
&lt;br /&gt;
T. J. Richer and T. M. Blackwell. The Lévy Particle Swarm. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 808-815, IEEE Press, Piscataway, NJ, 2006.&lt;br /&gt;
&lt;br /&gt;
Y. Shi and R. Eberhart. A modified particle swarm optimizer. In ''Proceedings of the IEEE Congress on Evolutionary Computation'', pages 69-73, IEEE Press, Piscataway, NJ, 1999.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
* Many journals and conferences publish papers on PSO:&lt;br /&gt;
** The main journal reporting research on PSO is [http://www.springer.com/11721 Swarm Intelligence]. Papers on PSO also appear regularly in other journals, such as IEEE Transactions on Evolutionary Computation.&lt;br /&gt;
** Conferences that regularly include PSO research are GECCO, ANTS, IEEE SIS, and the Evo* events.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
[[Optimization]], [[Stochastic Optimization]], [[Swarm Intelligence]], [[Ant Colony Optimization]]&lt;br /&gt;
&lt;br /&gt;
[[Category: Computational Intelligence]]&lt;br /&gt;
[[Category: Artificial Intelligence]]&lt;br /&gt;
[[Category:Artificial Life]]&lt;/div&gt;</summary>
		<author><name>Mmontes</name></author>
	</entry>
</feed>