On neuroscience foundation models - with Andreas Tolias - #24

Theoretical Neuroscience Podcast - A podcast by Gaute Einevoll - Saturdays

The term “foundation model” refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The guest's group has recently presented a foundation model for optophysiological responses in mouse visual cortex, trained on recordings from 135,000 neurons in mice watching movies. We discuss the design, validation, and use of this and future neuroscience foundation models.