### Abstract

Bayesian algorithms set a limit on the performance that learning algorithms can achieve. Natural selection should guide the evolution of information processing systems towards those limits. What can we learn from this evolution, and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, for programs that implement learning algorithms, confirm and extend these results. The principal result is not just that evolution proceeds towards a Bayesian limit; indeed, that limit is essentially reached. In addition, we find that evolution is driven by the discovery of useful structures, i.e. combinations of variables and operators. Across different runs, the temporal order in which such combinations are discovered is the same. The main result is that combinations signalling the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm. These latter structures can be used to implement annealing schedules. The temporal ordering can also be understood analytically by carrying out the functional optimization in restricted functional spaces. We also present data suggesting that the appearance of these traits follows the same temporal ordering in biological systems.
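To make the setting concrete, the following is a minimal sketch of on-line learning from examples in the teacher-student perceptron scenario the abstract refers to. It is not the paper's variationally optimal or evolved algorithm: the update rule shown (a Hebbian step applied only when an example is misclassified, i.e. when it carries "surprise") and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                     # input dimension
teacher = rng.standard_normal(N)            # fixed rule generating the labels
teacher /= np.linalg.norm(teacher)
student = rng.standard_normal(N)            # weights updated on-line

def generalization_error(J, B):
    """For perceptrons, e_g = arccos(overlap) / pi."""
    overlap = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

errors = []
for t in range(20000):
    x = rng.standard_normal(N)              # one example at a time (on-line)
    sigma_T = np.sign(teacher @ x)          # teacher's label
    h = student @ x / np.sqrt(N)            # student's local field
    surprised = sigma_T * h < 0             # example disagrees with the student
    if surprised:                           # update only on surprising examples
        student += sigma_T * x / np.sqrt(N)
    if t % 1000 == 0:
        errors.append(generalization_error(student, teacher))

print(f"e_g: start {errors[0]:.3f} -> end {errors[-1]:.3f}")
```

The paper's variational approach replaces the hard if/else with an optimal modulation of the update strength as a function of quantities like the field `h`; performance-gauging structures would additionally anneal that strength as learning progresses.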

Original language | English
---|---
Title of host publication | Bayesian inference and maximum entropy methods in science and engineering
Editors | Ali Mohammad-Djafari
Publisher | AIP
Pages | 203-210
Number of pages | 8
ISBN (Print) | 978-0-7354-0371-6
DOIs | https://doi.org/10.1063/1.2423276
Publication status | Published - 29 Dec 2006
Event | Bayesian inference and maximum entropy methods in science and engineering - Paris, France, 8 Jul 2006 → 13 Jul 2006

### Publication series

Name | AIP conference proceedings
---|---
Publisher | AIP
Volume | 872
ISSN (Print) | 0094-243X
ISSN (Electronic) | 1551-7616

### Conference

Conference | Bayesian inference and maximum entropy methods in science and engineering
---|---
Country | France
City | Paris
Period | 8 Jul 2006 → 13 Jul 2006


### Cite this

Caticha, N., & Neirotti, J. P. (2006). The evolution of learning systems: to Bayes or not to be. In A. Mohammad-Djafari (Ed.), *Bayesian inference and maximum entropy methods in science and engineering* (pp. 203-210). (AIP conference proceedings; Vol. 872). AIP. https://doi.org/10.1063/1.2423276

**The evolution of learning systems: to Bayes or not to be.** / Caticha, Nestor; Neirotti, Juan Pablo.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
