Tuesday, July 17, 2012

Johannes Bill: July 16th

Probabilistic inference and autonomous learning in recurrent networks of spiking neurons 

Numerous findings from cognitive science and neuroscience indicate that mammals learn and maintain an internal model of their environment, and that they employ this model during perception and decision making in a statistically optimal fashion. Indeed, recent experimental studies suggest that the required computational machinery for probabilistic inference and learning can be traced to the level of individual spiking neurons in recurrent networks.

At the Institute for Theoretical Computer Science in Graz, we examine (analytically and through computer simulations) how recurrent neural networks can represent complex joint probability distributions in their transient spike patterns, how such networks can integrate external input into a Bayesian posterior distribution, and how local synaptic learning rules enable spiking neural networks to autonomously optimize their internal model of the observed input statistics.
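
To make the first of these points concrete, here is a minimal Python sketch of the underlying idea: the stochastic state updates of binary "neurons" can be read as MCMC samples from a joint distribution, so that the network's trajectory over time represents the distribution. This is not the network model studied in Graz; the Boltzmann-shaped target distribution, the parameters W and b, and the Glauber-style update are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # A small Boltzmann distribution p(z) ~ exp(0.5 z.Wz + b.z) over
    # binary network states z in {0,1}^n (hypothetical parameters)
    n = 5
    W = rng.normal(0.0, 0.5, size=(n, n))
    W = 0.5 * (W + W.T)            # symmetric weights
    np.fill_diagonal(W, 0.0)       # no self-connections
    b = rng.normal(0.0, 0.5, size=n)

    def sigma(u):
        return 1.0 / (1.0 + np.exp(-u))

    # Glauber dynamics: a randomly chosen neuron switches on with
    # probability sigma(u_k), where u_k is its membrane potential
    # given the current state of the other neurons
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for t in range(50_000):
        k = rng.integers(n)
        u_k = b[k] + W[k] @ z
        z[k] = float(rng.random() < sigma(u_k))
        if t >= 10_000:            # discard burn-in
            samples.append(z.copy())
    samples = np.array(samples)

    # Exact marginals by enumerating all 2^n states, for comparison
    states = np.array([[(s >> i) & 1 for i in range(n)]
                       for s in range(2**n)], dtype=float)
    logp = 0.5 * np.einsum('si,ij,sj->s', states, W, states) + states @ b
    p = np.exp(logp - logp.max())
    p /= p.sum()

    print("sampled marginals:", np.round(samples.mean(axis=0), 3))
    print("exact marginals:  ", np.round(p @ states, 3))

Running this, the empirical marginals estimated from the network's trajectory match the exact marginals computed by enumeration, which is the sense in which a transient spike pattern can represent a probability distribution.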

In the talk, I aim to discuss how recurrent spiking networks can sample from graphical models by means of their internal dynamics, and how spike-timing-dependent plasticity rules can implement maximum-likelihood learning of generative models.
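
As a rough illustration of the second theme, the sketch below shows an STDP-flavoured learning rule in a winner-take-all circuit, loosely following the update dw = eta * (x * exp(-w) - 1) of Nessler, Pfeiffer and Maass (2010). The toy data, the soft winner selection via the posterior, and the Bernoulli "off" term folded into the membrane potential are my own simplifying assumptions for the demo, not the talk's model.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical toy input: binary patterns drawn from two noisy
    # prototype "hidden causes" the circuit should discover
    d, K, eta = 8, 2, 0.02
    protos = np.array([[.9] * 4 + [.1] * 4,
                       [.1] * 4 + [.9] * 4])

    # Weights are log-probabilities: exp(w[k, i]) ~ p(x_i = 1 | cause k)
    w = np.log(rng.uniform(0.2, 0.8, size=(K, d)))

    for t in range(20_000):
        x = (rng.random(d) < protos[rng.integers(2)]).astype(float)
        # Membrane potential = log-likelihood of x under each neuron's
        # implicit Bernoulli model (uniform prior over causes assumed)
        u = x @ w.T + (1.0 - x) @ np.log1p(-np.exp(w)).T
        post = np.exp(u - u.max())
        post /= post.sum()
        k = rng.choice(K, p=post)          # soft WTA: one output spike
        # STDP-like update applied to the winner only: potentiate
        # synapses whose input was active, depress the silent ones
        w[k] += eta * (x * np.exp(-w[k]) - 1.0)
        w[k] = np.minimum(w[k], -1e-3)     # keep exp(w) < 1

    print("learned p(x_i = 1 | k):")
    print(np.round(np.exp(w), 2))

At the rule's fixed point, exp(w[k, i]) equals the probability that input i is active given that neuron k fires, so the weights converge toward a maximum-likelihood mixture model of the input statistics: an instance of the generative-model learning the talk addresses.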
