Mathematical Biology Seminar

Jack Bowler, University of Utah
Tuesday, Oct. 15, 2024
2:00pm in LCB 323
Effect of experience on context-dependent learning in recurrent networks

Abstract: Learning new tasks is thought to be compositional in nature, with the acquisition of new skills relying on the assembly of relevant past experiences. But how is the relevance of previous tasks established? To answer this question, we trained recurrent neural networks (RNNs) on a temporal variant of the delayed non-match-to-sample (tDNMS) task. This task depends on the entorhinal cortex and requires learning distinct temporal contexts that shape decision making. Units within RNNs trained on this task exhibit sequences of sparsely active time fields, highly reminiscent of the time cell activity recorded in animals performing the task. Further, the training curriculum has a lasting impact on model performance: pre-training RNNs on a shaping task involving only non-match temporal contexts improves learning of the full task once the match context is introduced. Following shaping, RNNs develop more abstract representations of time, and distinct response and timing dynamics are observed. The shaping procedure is nearly identical to the tasks used to train animals to perform the tDNMS task, and the modeling approach yields predictions about the population dynamics animals must develop to fully solve such complex timing-based tasks.
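
To make the task structure and the two-stage curriculum concrete, the following is a minimal Python sketch of a toy tDNMS-style trial generator and a vanilla RNN forward pass. The encoding (three pulses marking a sample and a test interval, interval ranges, the single input channel) and all names are illustrative assumptions for this announcement, not the speaker's actual model, stimuli, or training setup; backprop-through-time training is omitted.

import numpy as np

rng = np.random.default_rng(0)

def make_trial(nonmatch, T=50):
    """Toy tDNMS-style trial: three pulses on one input channel mark a
    sample interval (d1) and a test interval (d2). Target is 1 (respond)
    on non-match trials, where d2 differs from d1. Illustrative encoding."""
    x = np.zeros((T, 1))
    d1 = int(rng.integers(5, 15))
    d2 = d1 + int(rng.integers(5, 15)) if nonmatch else d1
    x[2, 0] = 1.0              # start of sample interval
    x[2 + d1, 0] = 1.0         # end of sample / start of test interval
    x[2 + d1 + d2, 0] = 1.0    # end of test interval
    y = 1.0 if nonmatch else 0.0
    return x, y

# Vanilla (Elman-style) RNN: h_t = tanh(W_x x_t + W_h h_{t-1}),
# with a sigmoid readout from the final hidden state.
N = 64
W_x = rng.normal(0.0, 0.5, (N, 1))
W_h = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
w_out = rng.normal(0.0, 0.1, N)

def forward(x):
    h = np.zeros(N)
    states = []                        # hidden-state trajectory; time fields
    for x_t in x:                      # would be read off these dynamics
        h = np.tanh(W_x @ x_t + W_h @ h)
        states.append(h)
    p_respond = 1.0 / (1.0 + np.exp(-w_out @ h))
    return np.array(states), p_respond

# Two-stage curriculum ("shaping") as described in the abstract:
# pre-train on non-match-only trials, then on the full task mixing
# match and non-match contexts.
shaping_set = [make_trial(nonmatch=True) for _ in range(200)]
full_set = [make_trial(nonmatch=bool(rng.random() < 0.5)) for _ in range(200)]

The sketch is only meant to show where the curriculum enters: the same network would be trained first on shaping_set and then on full_set, and the hidden-state trajectories returned by forward are where time-cell-like sequences would be examined.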