David Zoltowski

Postdoctoral Scholar

Stanford University

About Me

I am a Wu Tsai Interdisciplinary Scholar and postdoc in statistics at Stanford University, where I work with Scott Linderman and David Sussillo on models of neural dynamics and on brain-computer interfaces. Before that, I obtained a PhD in Neuroscience at Princeton University, where I worked with Jonathan Pillow in the Pillow Lab on a variety of topics in statistical neuroscience. Previously, I received an MPhil in Engineering from the University of Cambridge as part of the Computational and Biological Learning Lab and a B.S. in Electrical Engineering from Michigan State University.

Download CV
Publications
(2024). Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems. arXiv preprint arXiv:2408.03330.
(2024). Modeling state-dependent communication between brain regions with switching nonlinear dynamical systems. The Twelfth International Conference on Learning Representations.
(2024). Structured flexibility in recurrent neural networks via neuromodulation. bioRxiv.
(2023). Competitive integration of time and reward explains value-sensitive foraging decisions and frontal cortex ramping dynamics. bioRxiv.
(2021). Neural Latents Benchmark '21: evaluating latent variable models of neural population activity. arXiv preprint arXiv:2109.04463.
(2021). Slice sampling reparameterization gradients. Advances in Neural Information Processing Systems.
(2020). A general recurrent state space framework for modeling neural dynamics during decision-making. Proceedings of the International Conference on Machine Learning (ICML).
(2020). Efficient non-conjugate Gaussian process factor models for spike count data using polynomial approximations. Proceedings of the International Conference on Machine Learning (ICML).
(2020). Modeling statistical dependencies in multi-region spike train data. Current Opinion in Neurobiology.
(2019). Discrete stepping and nonlinear ramping dynamics underlie spiking responses of LIP neurons during decision-making. Neuron.
(2018). Scaling the Poisson GLM to massive neural datasets through polynomial approximations. Advances in Neural Information Processing Systems.