Evolving artificial neural networks with complex genetic architectures
Benjamin Inden
Max Planck Institute for Mathematics in the Sciences
On 2007-11-16 at 10:00 (Brussels time)

Abstract

The way genes are interpreted biases an artificial evolutionary system towards certain phenotypes. When evolving artificial neural networks, methods using direct encoding have genes that represent neurons and synapses, while methods employing artificial ontogeny interpret genomes as recipes for the construction of phenotypes. Here, a neuroevolution system (NEON) is presented that can emulate a well-known neuroevolution method using direct encoding (NEAT), and can therefore solve the same kinds of tasks. Performance on pole-balancing tasks is reported, and a new family of tasks for benchmarking the evolution of memory is introduced. The underlying encoding used by NEON, however, is indirect, and it is shown how characteristics of artificial ontogeny can be introduced incrementally in different phases of the evolutionary search. Future work is discussed that could lead to the evolution of neural networks for tasks that have both complex dynamics and a large input/output space.
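To make the contrast between the two encoding styles concrete, the following Python sketch illustrates a NEAT-style direct encoding, in which every gene corresponds to exactly one neuron or synapse. This is purely an illustration with invented names, not NEON or NEAT source code; an indirect encoding such as NEON's would instead treat the genome as a construction recipe whose execution produces the network elements.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeGene:
    node_id: int
    node_type: str           # "input", "hidden", or "output"

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = 0      # historical marking, used by NEAT during crossover

@dataclass
class DirectGenome:
    """Direct encoding: the genome maps one-to-one onto the phenotype network."""
    nodes: List[NodeGene] = field(default_factory=list)
    connections: List[ConnectionGene] = field(default_factory=list)

    def build_network(self):
        # Decoding is trivial: every gene becomes exactly one network element.
        neurons = {n.node_id: n.node_type for n in self.nodes}
        synapses = [(c.in_node, c.out_node, c.weight)
                    for c in self.connections if c.enabled]
        return neurons, synapses

# Minimal example: two inputs, one output, two weighted connections.
genome = DirectGenome(
    nodes=[NodeGene(0, "input"), NodeGene(1, "input"), NodeGene(2, "output")],
    connections=[ConnectionGene(0, 2, 0.5, innovation=1),
                 ConnectionGene(1, 2, -1.2, innovation=2)],
)
print(genome.build_network())

Because each gene here is a network element, mutation operators act directly on structure; in a developmental (indirect) encoding the same phenotype would arise from running a growth process, which is the property NEON exploits while still being able to emulate the direct case.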

Keywords

evolutionary algorithms, artificial ontogeny, neural networks