Evolution of Adaptation Rules

We have developed a method to evolve the ability to "learn" on-line how to solve a task, instead of evolving a solution for a task as is common practice in evolutionary computation. On this page you will find an overview of the method, several experimental results, and links to the most important articles, video clips, and other resources.

Motivation

When one encodes in the genetic string the weights of a neural network (or any other fixed parameter of a system, such as the routing and cell functions of an FPGA), evolution often comes up with solutions that are tuned to the properties of the evolutionary environment. This has three important consequences:

Goals

Method

The method consists of encoding in the genetic string a set of local Hebbian learning rules to be applied to the connection weights; the connection strengths themselves are not encoded. The initial weights are always set to random values and can change after each sensory-motor loop (100 ms) according to the type of learning rule specified in the corresponding genes. That means that at the beginning of "life" each individual must develop from scratch the abilities necessary to solve the task. It also means that different learning rules can co-exist within the same neural network.
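As a rough illustration of this scheme, the sketch below implements a single synapse whose genotype fixes only the sign, the learning rate, and the rule type, while the weight starts random and is updated locally after each sensory-motor loop. The rule names and formulas here are illustrative examples of local Hebbian rules, not necessarily the exact set used in the experiments.

```python
import random

# Illustrative local Hebbian rules (a sketch, not the exact evolved set).
# Each takes the current weight and the pre-/post-synaptic activations.
RULES = {
    "hebb": lambda w, pre, post: (1 - w) * pre * post,
    "post": lambda w, pre, post: w * (pre - 1) * post + (1 - w) * pre * post,
    "pre":  lambda w, pre, post: w * pre * (post - 1) + (1 - w) * pre * post,
}

class Synapse:
    """One connection: the genotype fixes sign, rate, and rule type;
    the weight magnitude itself is NOT genetically specified."""
    def __init__(self, sign, rate, rule):
        self.sign, self.rate, self.rule = sign, rate, rule
        self.w = random.random()  # random initial strength at "birth"

    def update(self, pre, post):
        # Called once per sensory-motor loop (e.g. every 100 ms).
        self.w += self.rate * RULES[self.rule](self.w, pre, post)
        self.w = min(max(self.w, 0.0), 1.0)  # keep weight in [0, 1]

    def signal(self, pre):
        return self.sign * self.w * pre
```

Because each synapse carries its own rule gene, different rules co-exist in one network, and the weights developed during life are never written back into the genotype.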

The figure above compares conventional genetic encoding of synaptic weights, whereby an evolved individual is entirely genetically determined, with genetic encoding of Hebbian learning rules, whereby an evolved individual remains adaptive during its entire life. Notice that in the latter case the genotype also encodes the sign and the learning rate of the synapses. Both types of genetic encoding occupy the same genetic space.
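To make the "same genetic space" point concrete, here is a hypothetical decoding of such a genotype: a fixed number of bits per synapse (the layout below, 5 bits split into sign, rule, and rate fields, is an assumption for illustration), so that a rule-encoding genome is no longer than a weight-encoding one.

```python
# Hypothetical genotype layout: 5 bits per synapse ->
# 1 sign bit, 2 rule-selection bits, 2 learning-rate bits.
# Field widths and value tables are illustrative assumptions.
RULES = ["hebb", "presynaptic", "postsynaptic", "covariance"]
RATES = [0.0, 0.3, 0.6, 0.9]

def decode_synapse(bits):
    """Map a 5-bit gene to (sign, rule name, learning rate)."""
    sign = 1 if bits[0] else -1
    rule = RULES[bits[1] * 2 + bits[2]]
    rate = RATES[bits[3] * 2 + bits[4]]
    return sign, rule, rate

def decode_genotype(bitstring, n_synapses):
    # Same genetic space as weight encoding: 5 bits per synapse,
    # whether they specify a fixed weight or an adaptation rule.
    assert len(bitstring) == 5 * n_synapses
    return [decode_synapse(bitstring[i * 5:(i + 1) * 5])
            for i in range(n_synapses)]
```

Under this layout a mutation can flip a synapse from excitatory to inhibitory, switch its learning rule, or change its learning rate, but can never set its weight directly.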

Reactive Navigation

We first proposed this approach in 1996 and applied it to a simple reactive navigation task. The goal was to evolve a neural controller capable of driving as straight as possible while avoiding obstacles. The robot was put in the looping maze shown below, which had been used in previous experiments with the genetically-determined approach.

This experiment showed four main results:

People

Related documents