Paper Title

Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data

Paper Authors

Alessandro Salatiello and Martin A. Giese

Abstract


Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy is to adjust their input-output behavior so that it matches that of the biological circuit of interest. Even though this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match. This suggests that the trained RNNs might end up performing the task employing a different internal computational mechanism, which would make them a suboptimal model of the biological circuit. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics, based on sparse neural recordings. We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model. Specifically, this model generates the multiphasic muscle-like activity patterns typically observed during the execution of reaching movements, based on the oscillatory activation patterns concurrently observed in the motor cortex. Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons sampled from the biological network. Furthermore, we show that training the RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
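The training strategy described above amounts to optimizing two terms jointly: an output-matching term (the RNN readout should reproduce the muscle-like target signals) and an internal-dynamics term that penalizes mismatch between the RNN's hidden units and the activities of the small recorded subset of biological neurons. A minimal sketch of such a two-term objective is shown below; the function name, the squared-error form, the neuron-to-unit assignment via an index list, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def combined_loss(y_pred, y_target, h_pred, r_recorded, recorded_idx, lam=1.0):
    """Sketch of a two-term training objective (assumed form).

    y_pred, y_target : (T, n_outputs) readout vs. target output signals
    h_pred           : (T, n_units) hidden-unit activity of the trained RNN
    r_recorded       : (T, n_recorded) activity of the sparsely recorded neurons
    recorded_idx     : indices of the hidden units paired with recorded neurons
    lam              : relative weight of the internal-dynamics term
    """
    # Output term: match the RNN readout to the target (e.g. muscle activity)
    output_err = np.mean((y_pred - y_target) ** 2)
    # Dynamics term: match hidden activity only at the recorded subset of units
    dyn_err = np.mean((h_pred[:, recorded_idx] - r_recorded) ** 2)
    return output_err + lam * dyn_err
```

Because the dynamics term touches only the recorded subset, the remaining hidden units stay unconstrained, which is what makes the approach applicable to sparse neural recordings.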
