


Seminar: Professor Christian Machens

May 16 @ 5:00 pm - 6:30 pm

Efficient codes and balanced networks
Tuesday 16th of May @ 5:00 pm - 6:30 pm, Le Gros Clark Lecture Theatre

Christian Machens, from the Champalimaud Centre for the Unknown, Lisbon, Portugal, will talk about his research at the Cortex Club on Tuesday, 16th of May.


Current research focus
‘To develop models of information processing in the brain, the Theoretical Neuroscience Lab uses mathematical analysis and numerical simulations. These tools allow the researchers to formulate their ideas and intuitions in a precise manner and thereby put them to a test using real data. Specifically, the team focuses on several ‘higher-order’ regions such as the frontal cortices that are involved in turning sensory information into decisions. As part of the recent advances in the lab, the team has developed a new method that visualises how populations of neurons represent sensory information and decisions simultaneously. In addition, other advances in the lab include the development of a theory that describes how neurons communicate shared information. This theory resulted in the successful explanation of a large set of experimental observations.’

We have come to think of neural networks from a bottom-up perspective. Each neuron is characterized by an input/output function, and a network’s computational abilities emerge as a property of the collective. While immensely successful (see the recent deep-learning craze), this view has also created several persistent puzzles in theoretical neuroscience. The first puzzle is spikes, which have largely remained a nuisance, rather than a feature of neural systems. The second puzzle is learning, which has been hard or impossible without violating the constraints of local information flow. The third puzzle is robustness to perturbations, which is a ubiquitous feature of real neural systems, but often ignored in neural network models. I am going to argue that a resolution to these puzzles comes from a top-down perspective. We make two key assumptions. First, we assume that the effective output of a neural network can be extracted via linear readouts from the population. Second, we assume that a network seeks to bound the error on a given computation, and that each neuron’s voltage represents part of this global error. Spikes are fired to keep this error in check.
These assumptions yield efficient networks that exhibit irregular and asynchronous spike trains, balance of excitatory and inhibitory currents, and robustness to perturbations. I will discuss the implications of the theory, prospects for experimental tests, and future challenges.
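The mechanism sketched in the abstract can be illustrated with a toy simulation. The sketch below is a minimal, hedged reading of the idea (a spike-coding network in the spirit of work from the Machens lab, not the speaker's actual model): each neuron's voltage is a projection of the global coding error onto its decoding weight, a spike is fired when that projection exceeds a threshold equal to the spike's own "cost", and a leaky linear readout is updated by each spike. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, lam = 1e-3, 10.0                     # time step (s), readout leak rate (assumed)
t = np.arange(0.0, 1.0, dt)
x = 1.5 + np.sin(2 * np.pi * 2 * t)      # target signal the network should encode

N = 20                                   # population size (illustrative)
w_mag = rng.uniform(0.05, 0.15, N)
w = w_mag * np.where(np.arange(N) % 2 == 0, 1.0, -1.0)  # mixed-sign decoding weights
thresh = w ** 2 / 2                      # threshold = cost of adding one spike

xhat = 0.0                               # linear readout of the population
readout = np.zeros_like(x)
n_spikes = 0

for k in range(len(t)):
    err = x[k] - xhat                    # global coding error
    V = w * err                          # each voltage is a projection of the error
    i = int(np.argmax(V - thresh))       # greedy rule: best error-reducing neuron
    if V[i] > thresh[i]:
        xhat += w[i]                     # a spike instantly updates the readout
        n_spikes += 1
    xhat -= lam * xhat * dt              # leaky readout decays between spikes
    readout[k] = xhat
```

With this rule, the error stays bounded by roughly half the largest decoding weight, and which neuron fires at any moment is degenerate: silencing one neuron leaves others to pick up its share of the error, which is one intuition behind the robustness claim in the abstract.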




Lecture Theatre, Le Gros Clark Building
Oxford, OX1 3QX