Supervised Learning and Support Vectors for Deterministic Spiking Neurons
To signal the onset of salient sensory features or to execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. In this talk I will investigate the efficiency and fidelity with which neurons can perform such computations. I'll present a theory that characterizes the capacity of feedforward networks to generate desired spike sequences and discuss its results and implications. Additionally, I'll present the Finite Precision algorithm: a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences at a preassigned required precision. This framework can also be applied to reconstruct synaptic weights from observed spiking activity. Time permitting, I'll present further theoretical developments that extend the concept of a 'large margin' to dynamical systems with event-based outputs, such as spiking neural networks. These extensions allow us to define optimal solutions that implement the required input-output transformation robustly, and they open the way to incorporating dynamic, non-linear, spatio-temporal integration through the kernel method.
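To make the setting concrete, the sketch below simulates a deterministic leaky integrate-and-fire neuron mapping incoming spike trains to precisely timed output spikes. This is only an illustration of the kind of input-output transformation discussed in the talk, not the Finite Precision algorithm or the capacity theory itself; the function name, parameters (`tau`, `threshold`, `dt`), and reset rule are all illustrative assumptions.

```python
import numpy as np

def lif_output_spikes(input_spikes, weights, tau=20.0, threshold=1.0,
                      dt=0.1, t_max=100.0):
    """Deterministic leaky integrate-and-fire neuron (illustrative sketch).

    input_spikes: list of arrays of presynaptic spike times (ms), one per input.
    weights: synaptic weight for each input channel.
    Returns the output spike times (ms); the membrane resets to 0 after a spike.
    """
    n_steps = int(t_max / dt)
    # Bin presynaptic spikes onto the simulation grid as weighted impulses.
    drive = np.zeros(n_steps)
    for times, w in zip(input_spikes, weights):
        for t in times:
            idx = int(t / dt)
            if idx < n_steps:
                drive[idx] += w
    decay = np.exp(-dt / tau)  # exponential leak per time step
    v = 0.0
    out = []
    for i in range(n_steps):
        v = v * decay + drive[i]
        if v >= threshold:
            out.append(i * dt)  # record a precisely timed output spike
            v = 0.0             # reset after each output spike
    return np.array(out)
```

Because the dynamics are deterministic, the same input spike trains and weights always yield the same output spike times, which is what makes it meaningful to ask how many input-output mappings a network can realize and with what temporal precision.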