Population Codes, Behavior, and Hierarchical Sparse Coding: an Unsupervised Learning Approach and its Connections to Artificial Neural Networks
Friday 31 May, 4pm at the Sherrington Library, DPAG Sherrington Building, Oxford
The Cortex Club is delighted to host Associate Professor Demba Ba from Harvard University, who will talk to us about his work on computational tools for identifying neuronal populations during behaviour, and on deep sparse coding models that probe the principles underlying hierarchical sensory processing in the brain. Please join us on 31 May at the Sherrington Library, located in the Sherrington Building of the Department of Physiology, Anatomy and Genetics, Parks Road, Oxford.
Associate Professor Demba Ba has kindly agreed to meet students and staff individually. To arrange a meeting, please contact Tai-Ying Lee at tai-ying.lee [at] dpag.ox.ac.uk.
Everybody is also welcome to join us at the pub after the talk. Registration would be appreciated at: https://forms.gle/tuB9kEqwV5mapPKy8
Two important problems in neuroscience are to understand 1) how populations of neurons encode stimuli, and how this encoding relates to behavior, and 2) how the brain represents sensory signals hierarchically. I have developed theoretical and computational unsupervised learning tools to answer these questions. In the first part of my talk, I will describe a statistical framework for identifying sub-groups of neurons within a larger population that have similar response profiles. The framework clusters multiple rasters exhibiting nonlinear dynamics into an a-priori-unknown number of functional sub-groups, each of which comprises rasters with similar dynamics. I will show an application to clustering neuronal responses from the prefrontal cortex of mice in an experiment designed to characterize the neural underpinnings of the observational learning of fear. The method is able to identify “empathy” clusters of neurons, namely groups of neurons that allow an observer mouse to understand when a demonstrator is in distress.

In the second part of my talk, I will describe a deep generalization of the famous sparse coding model of Olshausen and Field. I will show a strong parallel between this deep sparse coding model and deep neural networks with ReLU nonlinearities: a deep neural network architecture with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable. The benefits of the deep sparse coding model are two-fold. First, it gives theory-based answers to the question “what is the complexity of learning a deep ReLU auto-encoder?”. Second, it makes experimentally testable predictions about the principles that may underlie hierarchical sensory processing in the brain.
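For readers curious about the sparse-coding-to-ReLU parallel mentioned in the abstract, a minimal numerical sketch may help. It rests on a standard observation (not taken from the talk itself): a single proximal-gradient step for non-negative sparse coding, started from zero with unit step size, reduces to a bias-shifted ReLU layer, so cascading such steps produces a feed-forward ReLU network. The dictionary sizes, random dictionaries, and penalty values below are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def sparse_code_step(x, W, lam):
    # One proximal-gradient (ISTA) step for non-negative sparse coding
    # of x in dictionary W, starting from a zero code with unit step size:
    #   z = prox(W.T @ x) = ReLU(W.T @ x - lam)
    # i.e. exactly a ReLU layer whose bias is the sparsity penalty.
    return relu(W.T @ x - lam)

rng = np.random.default_rng(0)
x = rng.normal(size=16)      # stand-in for an observed signal
dims = [16, 32, 24]          # per-layer dictionary sizes (made up)
lams = [0.5, 0.5]            # per-layer sparsity penalties (made up)

# Cascade the sparse coding models: each layer's (sparse, hidden)
# code becomes the input to the next layer, mirroring the abstract's
# "finite sequence of cascaded sparse coding models".
z = x
for d_in, d_out, lam in zip(dims[:-1], dims[1:], lams):
    W = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)  # random dictionary
    z = sparse_code_step(z, W, lam)

print(z.shape)         # final code has dims[-1] entries
print(np.all(z >= 0))  # codes are non-negative by construction
```

With the penalty terms acting as biases, the cascade is literally a two-layer ReLU network; the sparse coding view adds the interpretation that each hidden activation is an (unobserved) sparse code of the layer below.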