Noémi Éltető

Title:

Action Sequences in Animals and Machines

Abstract:

I will give a whirlwind tour of four of my PhD studies exploring how action sequences underlie efficient behavior in animals and how this inspires the development of artificial agents. Animals, as opposed to artificial agents such as LLMs, use parsimonious sequence representations. In my first study, we showed how humans learn motor skills composed of actions with long-range dependencies: humans relied on a progressively deepening action sequence context to build up their skill. In my second study, we uncovered variable-range dependencies in the syllable sequences of Bengalese finch songs. Inspired by habitual action sequence reuse in the animal world, the third and fourth studies augmented state-of-the-art artificial agents with sequence models as priors. In the third study, we showed that using action sequence compressibility as an auxiliary reward improves model-free reinforcement learning for locomotion control. In the fourth, we showed that action sequence priors can be used in planning to form macro-actions, enabling deeper tree search with less compute. Taken together, we found that habitually reusing action sequences facilitates faster skill learning and planning, helping to close the compute-efficiency gap between natural and artificial intelligence.