Tuesday, September 9, 2014

Roy Fox: September 24th

Optimal Selective Attention and Action in Reactive Agents

Intelligent agents, interacting with their environment, operate under constraints on what they can observe and how they can act. Unbounded agents can use standard Reinforcement Learning to optimize their inference and control under purely external constraints. Bounded agents, on the other hand, are subject to internal constraints as well, which only allow them to partially attend to their observations and to partially intend their actions, requiring rational selection of attention and action.

In this talk we will see how to find the optimal information-constrained policy in reactive (memoryless) agents. We will discuss a number of reasons why internal constraints are often best modeled as bounds on information-theoretic quantities, and why we can focus on reactive agents with hardly any loss of generality. We will link the solution of the constrained problem to that of soft clustering, and present some of its nice properties, such as principled dimensionality reduction.
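As an illustration of the soft-clustering connection mentioned above, here is a minimal sketch (not from the talk; all names and the setup are assumptions) of a Blahut-Arimoto-style alternating iteration for an information-constrained reactive policy: the policy update p(a|o) ∝ p(a) exp(β·U(o,a)) is alternated with the marginal update p(a) = Σ_o p(o) p(a|o), with β trading off utility against the information the policy carries about the observation.

```python
import numpy as np

def info_constrained_policy(U, p_obs, beta, n_iters=200):
    """Hypothetical sketch: U[o, a] is a utility table, p_obs the
    observation distribution, beta the information trade-off.
    Alternates p(a|o) ∝ p(a) exp(beta * U[o, a]) with the
    marginal update p(a) = sum_o p(o) p(a|o)."""
    n_obs, n_act = U.shape
    policy = np.full((n_obs, n_act), 1.0 / n_act)  # uniform init
    for _ in range(n_iters):
        marginal = p_obs @ policy                    # p(a)
        logits = np.log(marginal + 1e-12) + beta * U
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        policy = np.exp(logits)
        policy /= policy.sum(axis=1, keepdims=True)
    return policy

# Toy example: two observations preferring different actions.
# Small beta gives a soft, low-information policy; large beta
# sharpens it toward the unconstrained optimum.
U = np.array([[1.0, 0.0], [0.0, 1.0]])
p_obs = np.array([0.5, 0.5])
soft = info_constrained_policy(U, p_obs, beta=0.5)
hard = info_constrained_policy(U, p_obs, beta=20.0)
```

The same fixed-point structure appears in rate-distortion-style soft clustering, which is the link the abstract alludes to; the details of the talk's actual formulation may differ.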

Sunday, September 7, 2014

Søren Hauberg: September 10th

Grassmann Averages for Scalable Robust PCA 

As the collection of large datasets becomes increasingly
automated, the occurrence of outliers will increase --
or in terms of buzzwords: "big data implies big outliers".
While principal component analysis (PCA) is often used
to reduce the size of data, and scalable solutions exist,
it is well-known that outliers can arbitrarily corrupt
the results. Unfortunately, state-of-the-art approaches
for robust PCA do not scale beyond small-to-medium-sized
datasets. To address this, we introduce the Grassmann
Average (GA), which expresses dimensionality reduction
as an average of the subspaces spanned by the data.
Because averages can be efficiently computed, we immediately
gain scalability. GA is inherently more robust than PCA,
but we show that they coincide for Gaussian data.
We exploit the fact that averages can be made robust to
formulate the Robust Grassmann Average (RGA) as a form of
robust PCA.
Robustness can be with respect to vectors (subspaces) or
elements of vectors; we focus on the latter and use a
trimmed average. The resulting Trimmed Grassmann Average
(TGA) is particularly appropriate for computer vision
because it is robust to pixel outliers.
The algorithm has low computational complexity and minimal
memory requirements, making it scalable to "big noisy data."
We demonstrate TGA for background modeling, video restoration,
and shadow removal. We show scalability by performing robust
PCA on the entire Star Wars IV movie, a task beyond the
reach of any existing method.
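To make the "average of subspaces" idea concrete, here is a minimal sketch (an assumption-laden simplification, not the paper's implementation) of a 1D Grassmann Average: each sample is flipped into the hemisphere of the current estimate, and the flipped samples are averaged until the direction stabilizes.

```python
import numpy as np

def grassmann_average_1d(X, n_iters=50, seed=0):
    """Sketch of a 1D Grassmann Average.
    X: (n_samples, dim) data matrix, assumed centered.
    Returns a unit vector spanning the average 1D subspace."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iters):
        signs = np.sign(X @ q)       # flip samples toward q
        signs[signs == 0] = 1.0      # resolve ties arbitrarily
        q_new = (signs[:, None] * X).mean(axis=0)
        q_new /= np.linalg.norm(q_new)
        if np.allclose(q_new, q):    # converged
            break
        q = q_new
    return q

# Toy check: data elongated along the first axis, so the average
# subspace should align with it (as it would for PCA on Gaussian data).
X = np.random.default_rng(1).standard_normal((500, 3)) * np.array([5.0, 1.0, 1.0])
q = grassmann_average_1d(X)
```

Because each iteration is just a sign correction and a mean, the cost is linear in the number of samples, which is the source of the scalability claimed above; robust variants such as TGA would replace the mean with a trimmed, per-element average.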

Work in collaboration with Aasa Feragen (DIKU) and Michael
J. Black (MPI-IS).