We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Friday, March 22, 2013
Volker Pernice: March 27th
Title: Correlations and connectivity in populations of neurons
Abstract: Due to their ubiquity in experiments and their importance for neural network dynamics and function, covariances between neural spike trains have been studied extensively. Their origin and their relation to the structure of the network of synaptic connections can be understood in the framework of linearly interacting point processes. This model also approximately describes the covariances in networks of leaky integrate-and-fire neurons. Because direct as well as indirect connections generate covariances, the solution to the inverse problem of inferring network structure from covariances is not unique. However, the ambiguity can partly be resolved under the assumption of a sparse network.
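For intuition, here is a minimal numpy sketch (an illustration under assumed forms, not the speaker's code). It uses a linear relation C = inv(I - W) D inv(I - W)^T between a sparse connectivity matrix W and the covariance matrix C, shows that a second, dense connectivity reproduces exactly the same covariances, and notes how a sparsity assumption can single out the true network.

import numpy as np

rng = np.random.default_rng(0)
n = 20

# Sparse random connectivity (assumed 10% density, weights small enough that
# I - W stays well conditioned).
W = rng.normal(0.0, 0.1, (n, n)) * (rng.random((n, n)) < 0.1)
np.fill_diagonal(W, 0.0)

D = np.diag(rng.uniform(0.5, 1.5, n))   # diagonal source variances
B = np.linalg.inv(np.eye(n) - W)        # propagator: sums direct and indirect paths
C = B @ D @ B.T                         # model covariance matrix

# The inverse problem is not unique: the Cholesky factor of C yields another
# valid factorization C = B_hat D B_hat^T, hence another connectivity W_hat
# that reproduces the very same covariances.
L = np.linalg.cholesky(C)
B_hat = L @ np.diag(1.0 / np.sqrt(np.diag(D)))
W_hat = np.eye(n) - np.linalg.inv(B_hat)

print("same covariances:", np.allclose(B_hat @ D @ B_hat.T, C))
print("nonzeros in true W:", np.count_nonzero(W))
print("nonzeros in W_hat: ", np.count_nonzero(np.abs(W_hat) > 1e-8))
# W is sparse while W_hat is dense, so among the many networks consistent with
# C, a sparsity assumption is one way to prefer the true one.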
Wednesday, March 20, 2013
Josh Merel: March 20
Josh will talk about single index models and multiple index models; a toy estimation sketch follows the references below. Some references:
- example single index model estimation - http://snowbird.djvuzone.org/2008/abstracts/183.pdf
- newer (better?) single index model estimation - http://arxiv.org/abs/1104.2018
- a video-lecture on this stuff and applications - http://videolectures.net/nipsworkshops2012_ravikumar_single/
- previous well-known work employing Bregman divergence (for reference) - http://jmlr.csail.mit.edu/papers/volume6/banerjee05b/banerjee05b.pdf
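As a concrete illustration of the single index model setup y ≈ g(w·x) with an unknown monotone link g, here is a toy Isotron-style alternating scheme in numpy/scikit-learn: fit g by isotonic regression on the current projection, then nudge the direction w using the residuals. This is a sketch of a standard baseline related to the papers above, not necessarily the exact estimators they analyze.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n, d = 2000, 5

# Synthetic single index data: y = g(w_true . x) + noise with a monotone link.
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(n, d))
y = np.tanh(2.0 * X @ w_true) + 0.1 * rng.normal(size=n)

w = np.zeros(d)
for _ in range(200):
    z = X @ w
    # Fit the link nonparametrically: isotonic regression of y on the projection.
    g_z = IsotonicRegression(increasing=True).fit(z, y).predict(z)
    # Perceptron-like update of the index direction using the residuals.
    w = w + X.T @ (y - g_z) / n

w_hat = w / np.linalg.norm(w)
print("alignment with true direction:", abs(w_hat @ w_true))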
If there is time he will also review spike train kernels and propose a (new?) kernel - looking for feedback on this. A sketch of a standard baseline kernel follows the references below. Some references:
- easier version - http://arxiv.org/abs/1302.5964
- antecedent work - http://www.magicbroom.info/Papers/ShpigelmanSiPaVa05.pdf
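For reference, here is a sketch of a standard exponential spike train kernel (a common baseline, not the new kernel Josh will propose): it sums exp(-|s - t| / tau) over all pairs of spike times, so trains with many nearby spikes score as more similar.

import numpy as np

def spike_train_kernel(s, t, tau=0.05):
    """Exponential kernel between two spike trains given as arrays of spike times."""
    s = np.asarray(s, dtype=float)
    t = np.asarray(t, dtype=float)
    if s.size == 0 or t.size == 0:
        return 0.0
    # Sum the Laplacian kernel over all spike pairs (a positive definite kernel).
    return float(np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum())

# Toy usage: a nearly identical train scores higher than a very different one.
a = [0.10, 0.25, 0.40]
b = [0.11, 0.26, 0.41]
c = [0.70, 0.85]
print(spike_train_kernel(a, b), ">", spike_train_kernel(a, c))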
Tuesday, March 12, 2013
Arian Maleki: March 13
Title: Minimax image denoising via anisotropic nonlocal means
Abstract: Image denoising is a fundamental primitive in image processing and computer vision. Denoising algorithms have evolved from the classical linear and median filters to more modern schemes like total variation denoising, wavelet thresholding, and bilateral filters. A particularly successful denoising scheme is the nonlocal means (NLM) algorithm, which estimates each pixel value as a weighted average of other, similar noisy pixels. I start my talk by proving that the popular NLM algorithm does not "optimally" denoise images with sharp edges. Its weakness lies in the isotropic nature of the neighborhoods it uses to set its smoothing weights. In response, I introduce the anisotropic nonlocal means (ANLM) algorithm and prove that it is near minimax optimal for edge-dominated images from the Horizon class. On real-world test images, an ANLM algorithm that adapts to the underlying image gradients outperforms NLM by a significant margin, up to 2 dB in mean squared error.
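For readers unfamiliar with the baseline, here is a minimal (and deliberately slow) numpy sketch of classical isotropic NLM with assumed toy parameters: each pixel is replaced by a weighted average of pixels whose surrounding patches look similar. The anisotropic variant in the talk changes how the neighborhoods are shaped, which this sketch does not attempt.

import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.4):
    """Isotropic nonlocal means: weighted average over a search window,
    with weights from squared patch differences."""
    half_p, half_s = patch // 2, search // 2
    pad = half_p + half_s
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            weights, values = [], []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - half_p:ni + half_p + 1,
                                  nj - half_p:nj + half_p + 1]
                    dist = np.mean((ref - cand) ** 2)
                    weights.append(np.exp(-dist / h ** 2))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out

# Toy usage on a noisy step edge (the kind of image where the talk argues
# isotropic neighborhoods lose sharpness).
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.2 * rng.normal(size=clean.shape)
denoised = nlm_denoise(noisy)
print("MSE noisy:", np.mean((noisy - clean) ** 2),
      "MSE NLM:", np.mean((denoised - clean) ** 2))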