ongoing

Reverse engineering nonlinear dynamics models of neural activity

Training and interpreting RNN models of neural population responses.

Note: This page is in progress. A full manuscript and code release are forthcoming.

Introduction

A central goal in systems neuroscience is to understand the latent computations underlying large-scale neural recordings. A common strategy is to fit nonlinear dynamical models, such as recurrent neural networks (RNNs) or nonlinear state-space models, that accurately predict neural population activity. While these models can achieve strong predictive performance, interpreting the learned dynamics and computations remains challenging due to their nonlinearity and high dimensionality.

This project develops a framework for reverse-engineering trained nonlinear dynamics models by co-training them with an interpretable switching linear approximation. The switching linear model accurately reconstructs the nonlinear dynamics while remaining amenable to standard linear-systems analysis tools. This enables principled interpretation of fixed points, local linearizations, input–state interactions, and task-relevant dynamical modes without sacrificing predictive accuracy.

I have presented preliminary versions of this work at Cosyne 2024 and the SAND 2025 Meeting, and I am currently finalizing a manuscript with a comprehensive empirical evaluation.

Model

We jointly learn:

  • a nonlinear latent dynamics model that predicts neural population activity, and
  • a switching linear dynamical system that locally approximates the nonlinear dynamics across state space.

Crucially, the switching linear model is parameter-tied to the nonlinear dynamical system and provides an accurate reconstruction of the nonlinear dynamics based on linearizations around a learned fixed point structure. We can then apply standard linear systems tools to characterize the computations and dynamics implemented in the original nonlinear model.
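To make the co-training idea concrete, here is a minimal sketch in numpy. Everything in it is illustrative: a vanilla tanh RNN stands in for the nonlinear latent model, the fixed-point candidates are random placeholders (in the actual method they are learned jointly), and the penalty shown is just the intuition that the switching linear step, built from linearizations around the nearest fixed point, should match the nonlinear step.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # latent state dimension

# Hypothetical nonlinear dynamics: a tanh RNN step (a stand-in for
# whatever nonlinear latent model is actually trained).
W = rng.normal(scale=1.0 / np.sqrt(D), size=(D, D))
b = np.zeros(D)

def f(x):
    return np.tanh(W @ x + b)

def jacobian(x):
    # Jacobian of the tanh RNN at state x: diag(1 - tanh^2(Wx + b)) @ W
    pre = W @ x + b
    return np.diag(1.0 - np.tanh(pre) ** 2) @ W

# Placeholder fixed-point candidates; the real method learns these
# jointly with the dynamics during co-training.
fixed_points = 0.1 * rng.normal(size=(4, D))

def switching_linear_step(x):
    # Switch to the nearest fixed point, then take a first-order
    # (linearized) step around it.
    k = np.argmin(np.linalg.norm(fixed_points - x, axis=1))
    xs = fixed_points[k]
    return f(xs) + jacobian(xs) @ (x - xs)

# Co-training penalty: the switching linear approximation should
# reproduce the nonlinear one-step dynamics across state space.
x = 0.1 * rng.normal(size=D)
match_loss = np.sum((f(x) - switching_linear_step(x)) ** 2)
```

In training, this match penalty would be averaged over visited states and added to the prediction loss, so the linear approximation tracks the nonlinear model wherever the data live.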

This work builds on the JSLDS framework introduced in Smith et al., 2021. We adapt the co-training process, extend the approach to inferred inputs, introduce new analysis techniques based on the fitted model, and showcase the approach on two different neural datasets.

Figure: The graphical model combining a nonlinear dynamics model and a parameter-shared switching linear approximation.

Preliminary results: MEC analysis

We applied this framework to model Neuropixels recordings from the medial entorhinal cortex (MEC) of mice during a virtual spatial navigation task, a setting in which MEC representations were found to spontaneously remap (Low et al., 2021). Using this approach, we found evidence for a putative bistable dynamics mechanism underlying remapping in MEC.

Figure: Example results from reverse-engineering MEC recordings during spatial navigation.

The model also infers dynamical structure that explains both positional encoding as the animal navigates the environment and theta oscillations. The following figure shows a set of fixed points underlying spatial navigation. The fixed points are arranged along a ring and organized by position. This spatial attractor structure supports dynamic representations of position in neural activity.

Figure: The model infers a set of fixed points organized along a ring, supporting spatial representations.
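A standard way to expose fixed-point structure like this is to search for states where the dynamics are (approximately) stationary. The sketch below uses a toy contractive tanh RNN, so a simple damped fixed-point iteration provably converges; this is an illustration of the numerical idea only, not the project's actual procedure, which learns the fixed-point structure during co-training.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16
# Small weight scale keeps ||W|| < 1, so x -> tanh(Wx + b) is a
# contraction with a single global fixed point. Trained models of
# interest have richer fixed-point structure (e.g. a ring).
W = rng.normal(scale=0.4 / np.sqrt(D), size=(D, D))
b = rng.normal(scale=0.1, size=D)

def f(x):
    return np.tanh(W @ x + b)

def find_fixed_point(x0, alpha=0.5, tol=1e-8, max_iter=5000):
    """Damped iteration x <- x + alpha * (f(x) - x) until f(x) ~= x."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) - x
        if np.linalg.norm(dx) < tol:
            break
        x = x + alpha * dx
    return x

# Run from several random initial states and check the residuals.
starts = rng.normal(size=(8, D))
fps = np.array([find_fixed_point(x0) for x0 in starts])
residuals = np.linalg.norm(np.tanh(fps @ W.T + b) - fps, axis=1)
```

For non-contractive dynamics, the same idea is usually cast as minimizing the speed ||f(x) - x||^2 from many initial states, which can also find saddles and unstable fixed points.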

Finally, in the linearized dynamics we find a consistent eigenmode supporting oscillations near 8 Hz, in the theta frequency band. The following figure visualizes these oscillations over one second by projecting the high-dimensional neural state into the subspace spanned by the theta eigenmode.

Figure: (left) Example eigenspectrum of the linearized dynamics around a fixed point. (right) Projection of the neural state onto the theta dimensions visualizes the theta oscillations.
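The frequency of such a mode can be read directly off the eigenspectrum of a discrete-time linearization: a complex eigenvalue r·e^{iθ} oscillates at θ / (2π·Δt) Hz, and the real and imaginary parts of its eigenvector span the plane of the oscillation. The sketch below builds a toy Jacobian with a planted 8 Hz mode (the time step and matrix are assumptions for illustration) and recovers the frequency and projection.

```python
import numpy as np

dt = 0.001      # assumed 1 ms time step
f_theta = 8.0   # planted oscillation frequency, Hz
theta = 2 * np.pi * f_theta * dt

# Toy discrete-time linearization: one lightly damped rotational mode
# at 8 Hz embedded in an otherwise decaying system (a stand-in for the
# Jacobian at a fixed point of a trained model).
rot = 0.999 * np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
A = np.zeros((6, 6))
A[:2, :2] = rot
A[2:, 2:] = 0.7 * np.eye(4)

# Recover the oscillation frequency from the eigenspectrum.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals.imag))          # most oscillatory mode
freq_hz = np.abs(np.angle(eigvals[k])) / (2 * np.pi * dt)

# Real and imaginary parts of the eigenvector span the oscillation
# plane; orthonormalize them to get a 2-D projection basis.
v = eigvecs[:, k]
basis, _ = np.linalg.qr(np.column_stack([v.real, v.imag]))

# Project one second of simulated linear dynamics onto that plane.
x = np.zeros(6)
x[0] = 1.0
traj = []
for _ in range(1000):
    x = A @ x
    traj.append(basis.T @ x)
traj = np.array(traj)
```

The same recipe applied to the Jacobians of a fitted model yields the theta-band eigenmode and the 2-D projection shown in the figure above.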

Current status

  • Finalizing a full manuscript with extended analyses
  • Applying the method to additional neural datasets
  • Preparing a public release of code and trained models