Sunday, March 20, 2011

Carl Smith : March 22

This week in group meeting I will be presenting a somewhat recent paper from Josh Tenenbaum's group entitled "Modelling Relational Data using Bayesian Clustered Tensor Factorization", which proposes a model for relational data and argues that it is a happy compromise between clustering methods and factorization models, combining the strengths of each. I plan to present the model itself, some issues it addresses, and some of the results described in the paper.

Monday, March 14, 2011

Eizaburo Doi : March 15

I will discuss the details of the following paper:
B. G. Borghuis, C. P. Ratliff, R. G. Smith, P. Sterling, and V. Balasubramanian. Design of a neuronal array. Journal of Neuroscience, 28:3178–3189, 2008.

I'll also mention a couple of related papers, including those cited in:
T. E. Holy. "Yes! We're all individuals!": redundancy in neuronal circuits. Nature Neuroscience, 13:1306–1307, 2010.

Basically I plan to lead a discussion of efficient coding, population coding, redundancies in neural populations, and retinal coding.  This is partly because we're finishing a journal draft on this topic.  It would be great if you could bring any other papers that you'd like to discuss.

Monday, March 7, 2011

Kolia Sadeghi : March 8

At COSYNE, Cadieu and Koepsell had an interesting poster on joint models of amplitude and phase couplings between LFPs of different areas.  There is a paper out on the experimental findings [pdf] [supplement], and there are older papers on estimating models of joint phase couplings [pdf], all of which are interesting.  The model including amplitudes is poster-only for now, so I'll go over those papers quickly first; a small sketch of the basic phase-coupling model appears below.
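
For reference, here is a minimal sketch of the kind of pairwise phase-coupling model estimated in those older papers, as I understand it: a distribution over phases theta with unnormalized log-density sum_ij kappa_ij cos(theta_i − theta_j − mu_ij), sampled here with a plain Metropolis walk. The couplings, offsets, and sizes are made-up values for illustration, not anything taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                                       # number of LFP channels / oscillators

# Made-up pairwise couplings: kappa (strengths) symmetric, mu (preferred
# phase offsets) antisymmetric, both zero on the diagonal.
kappa = rng.uniform(0, 1, (d, d))
kappa = (kappa + kappa.T) / 2
np.fill_diagonal(kappa, 0.0)
mu = rng.uniform(-np.pi, np.pi, (d, d))
mu = (mu - mu.T) / 2

def log_p(theta):
    """Unnormalized log-density: 0.5 * sum_ij kappa_ij * cos(theta_i - theta_j - mu_ij)."""
    diff = theta[:, None] - theta[None, :] - mu
    return 0.5 * np.sum(kappa * np.cos(diff))

# Plain Metropolis sampling over the d phases, wrapping proposals to [-pi, pi).
theta = rng.uniform(-np.pi, np.pi, d)
samples = []
for t in range(20000):
    prop = np.mod(theta + 0.3 * rng.standard_normal(d) + np.pi, 2 * np.pi) - np.pi
    if np.log(rng.uniform()) < log_p(prop) - log_p(theta):
        theta = prop
    if t % 10 == 0:
        samples.append(theta.copy())
samples = np.array(samples)

# Coupled channels should show concentrated phase differences:
r = np.abs(np.mean(np.exp(1j * (samples[:, 0] - samples[:, 1]))))
print(f"phase locking between channels 0 and 1: {r:.2f}")
```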

Fritz Sommer's Adaptive Compressive Sensing is good to have seen at least once, so I'll go over it quickly as well if time allows.

Thursday, March 3, 2011

Adaptive Compressive Sensing

Fritz Sommer gave a COSYNE 2011 workshop presentation of seemingly magical results, coauthored with Guy Isely and Christopher Hillar.

Suppose an area of the brain deals in a signal which is sparse in some underlying, unknown dictionary.  This area subsamples the signal with, say, a random measurement matrix, and sends the subsampled signal to another area.  The receiving area doesn't know what the original signals were, what the underlying sparsifying dictionary was, or what the measurement matrix was; all it knows are the subsampled measurements it has received.  If the receiving area learns a dictionary in which the subsampled signals are sparse, can this sparse representation also be used to linearly represent the original signal?  The answer is yes.
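
Here is a minimal numpy/scikit-learn sketch of that setup (my own illustration, not the authors' code): sparse signals are randomly subsampled, a dictionary is learned from the subsampled measurements alone, and a linear readout fit from the resulting sparse codes recovers the original signals. All dimensions and parameter settings below are assumptions chosen for illustration, with sklearn's DictionaryLearning standing in for whatever sparse coding scheme the receiving area uses.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n, m, k, s, N = 64, 24, 96, 3, 1000   # signal dim, measurement dim, dict size, sparsity, samples

# The sender's world: an unknown sparsifying dictionary D and s-sparse signals.
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)
A = np.zeros((k, N))
for j in range(N):
    idx = rng.choice(k, size=s, replace=False)
    A[idx, j] = rng.standard_normal(s)
X = D @ A                                       # original signals (never seen by the receiver)

# Random subsampling, then transmission: Y is all the receiving area gets.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = Phi @ X

# The receiver learns a dictionary in which the measurements are sparse.
dl = DictionaryLearning(n_components=k, transform_algorithm='omp',
                        transform_n_nonzero_coefs=s, max_iter=30, random_state=0)
codes = dl.fit_transform(Y.T)                   # (N, k) sparse codes of the measurements

# Do those codes linearly represent the originals?  Fit a linear readout by
# least squares and measure the relative reconstruction error on X.
W, *_ = np.linalg.lstsq(codes, X.T, rcond=None)
rel_err = np.linalg.norm(codes @ W - X.T) / np.linalg.norm(X.T)
print(f"relative reconstruction error: {rel_err:.3f}")
```

If the result holds up, the printed error should be small even though the dictionary learner never saw X, D, or Phi; the final least-squares step touches the originals only to test the claim.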

To restore normality and disprove magic, read their NIPS paper.  Apparently a longer paper with proofs is due to come out soon.