Loughborough University
pugh_alife14.pdf (516.23 kB)

Real-time hebbian learning from autoencoder features for control tasks

conference contribution
posted on 2015-03-18, 13:41 authored by Justin K. Pugh, Andrea Soltoggio, Kenneth O. Stanley
Neural plasticity, and in particular Hebbian learning, plays an important role in many research areas related to artificial life. By allowing artificial neural networks (ANNs) to adjust their weights in real time, Hebbian ANNs can adapt over their lifetime. However, even as researchers improve and extend Hebbian learning, a fundamental limitation of such systems is that they learn correlations between preexisting static features and network outputs. A Hebbian ANN could in principle achieve significantly more if it could accumulate new features over its lifetime from which to learn correlations. Interestingly, autoencoders, which have recently gained prominence in deep learning, are themselves in effect a kind of feature accumulator that extracts meaningful features from its inputs. The insight in this paper is that if an autoencoder is connected to a Hebbian learning layer, then the resulting Realtime Autoencoder-Augmented Hebbian Network (RAAHN) can actually learn new features (with the autoencoder) while simultaneously learning control policies from those new features (with the Hebbian layer) in real time as an agent experiences its environment. In this paper, the RAAHN is shown in a simulated robot maze navigation experiment to enable a controller to learn the perfect navigation strategy significantly more often than several Hebbian-based variant approaches that lack the autoencoder. In the long run, this approach opens up the intriguing possibility of real-time deep learning for control.
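The abstract's core idea, an autoencoder that accumulates features online while a Hebbian layer learns a control policy from those same features, can be illustrated with a minimal sketch. This is not the paper's implementation; the network sizes, learning rates, and the use of a simple reconstruction-gradient autoencoder with a modulated Hebbian output layer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_feat, n_out = 8, 4, 2
W_enc = rng.normal(0, 0.1, (n_feat, n_in))  # encoder: inputs -> features
W_dec = rng.normal(0, 0.1, (n_in, n_feat))  # decoder: features -> reconstruction
W_heb = np.zeros((n_out, n_feat))           # Hebbian control layer on features

lr_ae, lr_heb = 0.05, 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, modulation=1.0):
    """One real-time update: refine the feature extractor on the current
    input, then adjust the control policy on the extracted features."""
    global W_enc, W_dec, W_heb
    h = sigmoid(W_enc @ x)                  # features extracted by the autoencoder
    err = x - W_dec @ h                     # reconstruction error
    # Autoencoder update (one gradient step on reconstruction error)
    W_dec += lr_ae * np.outer(err, h)
    W_enc += lr_ae * np.outer((W_dec.T @ err) * h * (1.0 - h), x)
    # Modulated Hebbian update: correlate outputs with the learned features
    # (a real system would bound or normalize W_heb to prevent runaway growth)
    y = np.tanh(W_heb @ h)
    W_heb += lr_heb * modulation * np.outer(y, h)
    return y

# Agent "experiences its environment": both layers learn on every step
for _ in range(100):
    y = step(rng.random(n_in), modulation=1.0)
```

The key property the sketch shows is that the two learning processes run simultaneously on the same data stream: the Hebbian layer is never trained on fixed, hand-designed features, but on representations that are themselves still improving.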

Funding

This work was partially supported through a grant from the US Army Research Office (Award No. W911NF-11-1-0489). This paper does not necessarily reflect the position or policy of the government, and no official endorsement should be inferred. This work was also partially supported by the European Community's Seventh Framework Programme FP7/2007-2013, Challenge 2 Cognitive Systems, Interaction, Robotics under grant agreement No. 248311 - AMARSi.

History

School

  • Science

Department

  • Computer Science

Published in

Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XIV)

Pages

202 - 209 (8)

Citation

PUGH, J.K., SOLTOGGIO, A. and STANLEY, K.O., 2014. Real-time Hebbian Learning from Autoencoder Features for Control Tasks. IN: Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE XIV). Cambridge, MA: MIT Press, pp. 202-209.

Publisher

MIT Press

Version

  • AM (Accepted Manuscript)

Publisher statement

This work is made available according to the conditions of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence. Full details of this licence are available at: https://creativecommons.org/licenses/by-nc-nd/4.0/

Publication date

2014

Notes

This is a conference paper.

Language

  • en

Location

NYC, USA
