PAMI-TC

CVPR14 Plenary Speakers

Neural mechanisms for face processing
How the brain distills a representation of meaningful objects from retinal input is one of the central challenges of systems neuroscience. Functional imaging experiments in the macaque reveal that one ecologically important class of objects, faces, is represented by a system of six discrete, strongly interconnected regions. Electrophysiological recordings show that these 'face patches' have unique functional profiles. By studying the distinct visual representations maintained in these six face patches, the sequence of information flow between them, and the role each plays in face perception, we are gaining new insights into hierarchical information processing in the brain.



Are Deep Networks a Solution to the Curse of Dimensionality?
Learning gave a considerable and surprising boost to computer vision, and deep neural networks appear to be the new winners of the fierce race on classification errors. Algorithmic refinements are now going well beyond our understanding of the problem, and seem to render the study of computer vision models irrelevant.

Yet learning from high-dimensional data such as images suffers from a curse of dimensionality, which predicts a combinatorial explosion. Why do these neural architectures avoid this curse? Is this rooted in properties of images and visual tasks? Can these properties be related to high-dimensional problems in other fields? We shall explore the mathematical roots of these questions, and tell a story in which invariants, contractions, sparsity, dimension reduction and multiscale analysis play important roles. Images and examples will give a colorful background to the talk.
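One face of the curse of dimensionality mentioned above is the concentration of distances: as the dimension grows, the gap between the nearest and the farthest neighbor of a point shrinks relative to the distances themselves, which undermines naive similarity-based learning. The following sketch (not from the talk; all names are our own) illustrates this with random points in the unit cube:

```python
import math
import random

def relative_distance_spread(dim, n_points=200, seed=0):
    """Sample random points in the unit cube [0,1]^dim and return the
    relative spread (max - min) / min of their pairwise distances.
    In high dimension this spread collapses: "near" and "far" points
    become almost indistinguishable."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [
        math.dist(pts[i], pts[j])
        for i in range(n_points)
        for j in range(i + 1, n_points)
    ]
    lo, hi = min(dists), max(dists)
    return (hi - lo) / lo

for d in (2, 10, 100, 1000):
    # The spread decreases steadily as the dimension d grows.
    print(f"dim={d:4d}  relative spread={relative_distance_spread(d):.3f}")
```

With only a few hundred samples, the relative spread drops by an order of magnitude between dimension 2 and dimension 1000, hinting at why structure such as invariance and sparsity is needed to learn from images at all.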