26 Jul 2022

Functional correlates of immediate early gene expression in mouse visual cortex

Bringing together immediate early genes and sensorimotor response properties in V1

Recommended by and based on reviews by Balázs Hangya and 2 anonymous reviewers

The primary visual cortex (V1) does not just process vision: it also integrates self-generated motion signals (Niell and Stryker 2010; Keller et al. 2012; Saleem et al. 2013; Vélez-Fort et al. 2018; Meyer et al. 2018), enabling us to match our actions to the world we see. We know that the development of visuomotor representations in V1 depends on experience (Attinger et al. 2017; Widmer et al. 2022), but how exactly does each neuron acquire the right balance of visual and motor input? And how do some neurons become more responsive to visual or to motor signals? Mahringer and colleagues (Mahringer et al. 2022) suspected that the answers may lie in experience-specific plasticity mechanisms.

To investigate this, the authors measured the expression of immediate early genes (IEGs) as indicators of both past neural activity and future plasticity. They examined three IEGs previously implicated in visual cortical plasticity: c-fos, egr1 and Arc (Yamada et al. 1999; Wang et al. 2006; Xie et al. 2014). In three separate transgenic mouse lines, GFP expression was driven by these IEGs, and a red variant of a genetically encoded calcium indicator allowed for simultaneous measurement of neuronal activity. Initial characterisation of IEG expression and calcium fluorescence revealed that IEG levels were only weakly (positively) correlated with visually-evoked neural activity. 

But what about the relationship between IEG expression and first visual or visuomotor experience? In dark-reared mice, first visual and visuomotor experiences led to differential IEG expression: Arc expression increased after first visual and visuomotor experiences; EGR1 expression decreased after first visuomotor experience; and c-Fos expression remained largely unchanged. Neural activity levels could not account for these changes, suggesting that different sensory experiences can selectively recruit different IEG expression patterns, perhaps according to input pathway.

Further analysis of those neurons with the highest levels of IEG expression revealed that different IEGs were associated with different functional response properties. High Arc-expressing neurons developed above-average visual and below-average motor responses, while high EGR1-expressing neurons developed above-average motor responses. These results suggest that during experience-dependent wiring, Arc expression drives plasticity favouring bottom-up visual input, while EGR1 expression drives plasticity favouring top-down motor input. Interestingly, while Arc-expressing neurons appear to end up with little-to-no motor input, EGR1-expressing neurons appear to enjoy both visual and motor input, enabling them to display above-average visuomotor mismatch responses. 

Overall, this work makes two important advances. First, it suggests that IEG expression may be more closely linked to specific forms of plasticity than general levels of neural activity. Second, it reveals a mechanism by which visual cortical neurons can acquire specific functional properties by selectively upregulating bottom-up or top-down inputs in response to particular sensory experiences. 

As an additional note, we would like to highlight a vigorous technical discussion that this manuscript triggered: unconventionally, the authors chose not to apply a neuropil correction procedure to their calcium imaging data. This decision split opinion, amongst both reviewers and recommenders. We have come to the view that the findings are nevertheless of interest for the community and are pleased to point readers towards the publicly available reviews and authors’ responses. 


Attinger A, Wang B, Keller GB (2017) Visuomotor Coupling Shapes the Functional Development of Mouse Visual Cortex. Cell, 169, 1291-1302.e14.

Keller GB, Bonhoeffer T, Hübener M (2012) Sensorimotor mismatch signals in primary visual cortex of the behaving mouse. Neuron, 74, 809–815.

Mahringer D, Zmarz P, Okuno H, Bito H, Keller GB (2022) Functional correlates of immediate early gene expression in mouse visual cortex. bioRxiv, 2020.11.12.379909, ver. 4 peer-reviewed and recommended by Peer Community in Neuroscience.

Meyer AF, Poort J, O’Keefe J, Sahani M, Linden JF (2018) A Head-Mounted Camera System Integrates Detailed Behavioral Monitoring with Multichannel Electrophysiology in Freely Moving Mice. Neuron, 100, 46-60.e7.

Niell CM, Stryker MP (2010) Modulation of Visual Responses by Behavioral State in Mouse Visual Cortex. Neuron, 65, 472–479.

Saleem AB, Ayaz A, Jeffery KJ, Harris KD, Carandini M (2013) Integration of visual motion and locomotion in mouse visual cortex. Nature Neuroscience, 16, 1864–1869.

Vélez-Fort M, Bracey EF, Keshavarzi S, Rousseau CV, Cossell L, Lenzi SC, Strom M, Margrie TW (2018) A Circuit for Integration of Head- and Visual-Motion Signals in Layer 6 of Mouse Primary Visual Cortex. Neuron, 98, 179-191.e6.

Wang KH, Majewska A, Schummers J, Farley B, Hu C, Sur M, Tonegawa S (2006) In Vivo Two-Photon Imaging Reveals a Role of Arc in Enhancing Orientation Specificity in Visual Cortex. Cell, 126, 389–402.

Widmer FC, O’Toole SM, Keller GB (2022) NMDA receptors in visual cortex are necessary for normal visuomotor integration and skill learning. eLife, 11, e71476.

Xie H, Liu Y, Zhu Y, Ding X, Yang Y, Guan J-S (2014) In vivo imaging of immediate early gene expression reveals layer-specific memory traces in the mammalian brain. Proceedings of the National Academy of Sciences, 111, 2788–2793.

Yamada Y, Hada Y, Imamura K, Mataga N, Watanabe Y, Yamamoto M (1999) Differential expression of immediate-early genes, c-fos and zif268, in the visual cortex of young rats: effects of a noradrenergic neurotoxin on their expression. Neuroscience, 92, 473–484.

29 Nov 2021

Nonlinear computations in spiking neural networks through multiplicative synapses

Approximate any nonlinear system with spiking artificial neural networks: no training required

Recommended by based on reviews by 2 anonymous reviewers

Artificial (spiking) neural networks (ANNs) have become an important tool for modelling biological neuronal circuits. However, they come with caveats: their typical training can be laborious, and once it is done, the complexity of the resulting connectivity can be almost as daunting as that of the original biological systems we are trying to model.

In this work [1], Nardin and colleagues summarize and expand upon the Spike Coding Network (SCN) framework [2], which provides a direct method to derive the connectivity of a spiking ANN representing any given linear dynamical system. They generalize this framework to approximate any (nonlinear) dynamical system by deriving the connectivity necessary to represent its polynomial expansion, which is achieved by including multiplicative synapses in the network connections. They show that higher polynomial orders can be represented efficiently with hierarchical network structures. The resulting networks not only enjoy many of the desirable features of traditional ANNs, such as robustness to (artificial) cell death and realistic patterns of activity, but also a much more interpretable connectivity. This interpretability is promptly leveraged to derive how densely connected a network of this type must be to represent dynamical systems of different complexities.

The derivations in this work are self-contained, and the mathematically inclined neuroscientist can quickly get up to speed with the new multiplicative SCN framework without prior specific knowledge of SCNs. All the code is available and well commented, making this introduction even more accessible to its readers. This paper is relevant for those interested in neural representations of dynamical systems and in the possible roles of multiplicative synapses and dendritic nonlinearities. Those interested in neuromorphic computation will find here an efficient and direct way of representing nonlinear dynamical systems (at least those well approximated by low-order polynomials). Finally, those interested in neural temporal pattern generators might find it surprising that as few as 10 integrate-and-fire neurons can already approximate a chaotic Lorenz system quite reasonably.
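
For readers who want a feel for the basic SCN idea before tackling the multiplicative extension, here is a minimal one-dimensional sketch. This is our own toy illustration of the original linear coding rule from Boerlin et al. [2], not code from the paper, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

# Toy spike coding network (SCN) tracking a 1-D signal: neurons fire
# greedily whenever firing would reduce the decoding error, which yields
# a readout that stays within one decoder weight of the target.
# Illustrative sketch only; parameters are arbitrary assumptions.

N, dt, lam = 20, 1e-3, 10.0              # neurons, time step (s), readout decay
D = np.repeat([0.1, -0.1], N // 2)       # decoding weight of each neuron
thresh = D**2 / 2                        # standard SCN firing thresholds

t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * 2 * t)            # target signal to encode
xhat = 0.0                               # decoded readout
readout = np.empty_like(t)

for i, xi in enumerate(x):
    xhat *= 1.0 - lam * dt               # leaky readout decay
    V = D * (xi - xhat)                  # voltage = decoder-projected error
    j = int(np.argmax(V - thresh))       # greedy rule: at most one spike/step
    if V[j] > thresh[j]:
        xhat += D[j]                     # each spike nudges the readout
    readout[i] = xhat

error = float(np.mean(np.abs(readout - x)))
```

Even this tiny network keeps the decoding error bounded by roughly the size of a single decoder weight, which is the property the multiplicative extension inherits for nonlinear systems.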


[1] Nardin, M., Phillips, J. W., Podlaski, W. F., and Keemink, S. W. (2021) Nonlinear computations in spiking neural networks through multiplicative synapses. arXiv, ver. 4 peer-reviewed and recommended by Peer Community in Neuroscience.

[2] Boerlin, M., Machens, C. K., and Denève, S. (2013). Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology, 9(11), e1003258.

02 Sep 2021

Neurons in the mouse brain correlate with cryptocurrency price: a cautionary tale

Can a mouse understand the crypto market?

Recommended by based on reviews by Kenneth Harris, Anirudh Kulkarni and 1 anonymous reviewer

Nowadays it is pretty much accepted that in animals with a nervous system, neural activity leads to behaviour. This framework is very useful to ultimately find a satisfying explanation of how and why animals behave, as it implies that there is a causal relationship between neuronal spiking and muscle and gland activity. In order to get closer to this causation, a common approach in neuroscience is to find correlations between behavioural variables and neuronal activity. Dr. Meijer's manuscript "Neurons in the mouse brain correlate with cryptocurrency price: a cautionary tale" [1] serves as a proof of concept that neuroscientists need to be careful about the statistical tests they use when looking for these correlations.

In this work, the author considers two recent datasets containing signals that display slow continuous trends over time: neuronal spiking activity from 40,100 neurons, and Bitcoin and Ethereum prices. When testing for correlations between the activity of individual neurons and the simultaneous fluctuations of cryptocurrency prices, he finds that over two thirds of the neurons correlate significantly, and that classical conservative correction methods still leave one third of the neurons showing a correlation. To estimate the true false discovery rate of this type of comparison, the author tested two statistical methods previously shown to work on simulated data [2]. He shows that, also for this large-scale dataset, both the session permutation and the linear shift method manage to reduce the number of correlated neurons to statistically acceptable levels. Additionally, the author goes on to show that the slow time constants of the crypto prices are at the root of the initial correlations. This work serves as an example of how misled scientists can be if proper statistical tests are not applied to avoid "nonsense correlations" with neuronal data, and it aims to increase awareness of this problem in the neuroscience community.
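
The pitfall, and the shift-based fix, are easy to reproduce in a few lines. The sketch below is our own toy example, not the paper's code: it correlates two independent random walks (stand-ins for a smoothed firing rate and a coin price) and then builds a null distribution with a simplified version of the linear shift method.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Two INDEPENDENT slowly drifting signals: by construction, any
# correlation between them is a "nonsense correlation".
neuron = np.cumsum(rng.standard_normal(n))   # stand-in for firing rate
price = np.cumsum(rng.standard_normal(n))    # stand-in for a coin price

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

r_obs = pearson(neuron, price)   # often sizeable despite true independence

# Simplified linear shift method (after Harris 2020 [2]): recompute the
# correlation at many relative time shifts. Autocorrelated signals give
# large |r| at most shifts too, so the observed value is judged against
# that null distribution instead of against the naive t-test.
null = np.array([pearson(neuron[s:], price[:n - s])
                 for s in range(100, n - 100, 20)])
p_shift = float(np.mean(np.abs(null) >= abs(r_obs)))
```

With signals like these, a naive Pearson test will very often report a "significant" correlation, while the shift-based p value typically stays unremarkable.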

This rigorous and yet entertaining work can now be added to the collection of cautionary tales that include a dead salmon understanding human emotions [3] and rat cortical neurons predicting stock market prices [4]. At the very least, it can be a piece of advice for Elon Musk to wait for more evidence before merging two of his new recent interests.


[1] Meijer, Guido. (2021). Neurons in the Mouse Brain Correlate with Cryptocurrency Price: A Cautionary Tale. PsyArXiv, ver. 3 peer-reviewed and recommended by Peer Community in Circuit Neuroscience.

[2] K. D. Harris. (2020). Nonsense correlations in neuroscience. bioRxiv. 402719.

[3] C. M. Bennett, A. A. Baird, M. B. Miller, and G. L. Wolford. (2010). Neural correlates of interspecies perspective taking in the post-mortem Atlantic salmon: an argument for multiple comparisons correction. Journal of Serendipitous and Unexpected Results, 1, 1-5.


[4] T. Marzullo, C. Miller, and D. Kipke. (2016). Stock Market Behavior Predicted by Rat Neurons. Annals of Improbable Research, 12, 401.

20 Apr 2021

A quick and easy way to estimate entropy and mutual information for neuroscience

Estimating the entropy of neural data by saving them as a .png file

Recommended by , and based on reviews by Federico Stella and 2 anonymous reviewers

Entropy and mutual information are useful metrics for quantitative analyses of various signals across the sciences including neuroscience (Verdú, 2019). The information that a neuron transfers about a sensory stimulus is just one of many examples of this. However, estimating the entropy of neural data is often difficult due to limited sampling (Tovée et al., 1993; Treves and Panzeri, 1995). This manuscript overcomes this problem with a 'quick and dirty' trick: just save the corresponding plots as PNG files and measure the file sizes! The idea is that the size of the PNG file obtained by saving a particular set of data will reflect the amount of variability present in the data and will therefore provide an indirect estimation of the entropy content of the data. 
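
As a point of reference for what the PNG trick is approximating, here is the standard plug-in (naive) estimator of Shannon entropy. This is our own illustration; it is exactly the quantity that suffers from the limited-sampling bias mentioned above.

```python
import numpy as np

def plugin_entropy(samples) -> float:
    """Naive 'plug-in' Shannon entropy estimate, in bits.

    With few samples this estimator is biased downward -- the
    limited-sampling problem that motivates indirect estimators
    such as the PNG file-size trick.
    """
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() + 0.0)

# A fair binary spike/no-spike variable carries 1 bit; a constant, 0 bits.
print(plugin_entropy([0, 1] * 500))  # 1.0
print(plugin_entropy([7] * 100))     # 0.0
```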

The method the study employs is based on Shannon’s Source Coding Theorem - an approach used in the field of compressed sensing - which is still not widely used in neuroscience. The resulting algorithm is very straightforward, essentially consisting of just saving a figure of your data as a PNG file. Therefore it provides a useful tool for a fast and computationally efficient evaluation of the information content of a signal, without having to resort to more math-heavy methods (as the computation is done “for free” by the PNG compression software). It also opens up the possibility to pursue a similar strategy with other (than PNG) image compression software. The main limitation is that the PNG conversion method presented here allows only a relative entropy estimation: the size of the file is not the absolute value of entropy, due to the fact that the PNG algorithm also involves filtering for 2D images. 
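
The core intuition can be demonstrated without any plotting at all: PNG's compression stage is DEFLATE (zlib), so the compressed byte count of a signal already behaves as a relative entropy proxy. Below is our own minimal illustration, using zlib directly on raw bytes rather than actual PNG files (so it skips PNG's 2D filtering step).

```python
import zlib
import numpy as np

def compressed_size(arr: np.ndarray) -> int:
    # Compressed byte count as a crude RELATIVE proxy for entropy --
    # relative only, as with the PNG method itself, because headers and
    # compressor overhead add an arbitrary offset.
    return len(zlib.compress(np.ascontiguousarray(arr, dtype=np.uint8).tobytes(), 9))

rng = np.random.default_rng(0)
flat = np.zeros(10_000, dtype=np.uint8)                  # no variability
periodic = np.tile(np.arange(50, dtype=np.uint8), 200)   # structured signal
noise = rng.integers(0, 256, 10_000, dtype=np.uint8)     # maximal entropy

sizes = [compressed_size(x) for x in (flat, periodic, noise)]
# Compressed sizes increase with variability: flat < periodic < noise.
```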

The study comprehensively reviews the use of entropy estimation in circuit neuroscience, and then tests the PNG method against other math-heavy methods, which have also been made accessible elsewhere (Ince et al., 2010). The study demonstrates the use of the method in several applications. First, the mutual information between stimulus and neural response in whole-cell and unit recordings is estimated. Second, the study applies the method to experimental situations with less experimental control, such as recordings of hippocampal place cells (O'Keefe and Dostrovsky, 1971) as animals freely explore an environment. The study shows the method can replicate previously established metrics in the field (e.g. Skaggs information; Skaggs et al., 1993). Importantly, it does this while making fewer assumptions about the data than traditional methods. Third, the study extends the use of the method to imaging data of neuronal morphology, such as charting the growth stage of neuronal cultures. However, the radial entropy of a dendritic tree seems at first more difficult to interpret than the common Sholl analysis of radial crossings of dendrite segments (Figure 6Ac of Zbili and Rama, 2021). As the authors note, a similar technique is used in paleobiology to discriminate pictures of biogenic rocks from abiogenic ones (Wagstaff and Corsetti, 2010). Perhaps neuronal subtypes could also be easily distinguished through PNG file size (Yuste et al., 2020). These examples are generally promising and creative applications. The authors used open-source software and openly shared their code, so anyone can give it a spin.

We were inspired by the wide applicability of the presented back-of-the-envelope technique, so we used it in a situation that the study had not tested: the dissection of microcircuits via optogenetic tagging of target neurons. In this process, one is often confronted with the problem that not only the opsin-carrying cells spike in response to light, but also other nearby neurons that are activated synaptically (via the opto-tagged cell). Separating these two types of responses is typically done using a latency or jitter analysis, which requires the experimenter to search subjectively for detection parameters; a rapid and objective technique is therefore preferable. Applying the PNG rate difference method to whole-cell slice recordings of opsin-tagged neurons revealed higher mutual information metrics for direct optogenetic activation than for postsynaptic responses, showing that the method can easily be used to segregate different spike triggers objectively.


Figure caption: Using a PNG entropy metric to distinguish between direct optogenetic responses and postsynaptic excitatory responses. Left, PNG rate difference calculated for whole-cell recordings of optogenetic activation in brain slices. About 20 consecutive 60 ms sweeps were analysed from each of 7 postsynaptic cells and 8 directly activated cells. Analysis was performed as in Fig. 4B of the preprint, using the code accompanying it. Right, six example traces from a cell carrying channelrhodopsin (black, top) and a cell that was excited synaptically (gray, bottom).




Ince, R.A.A., Mazzoni, A., Petersen, R.S., and Panzeri, S. (2010). Open source tools for the information theoretic analysis of neural data. Front Neurosci 4.
O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34(1), 171-175.
Skaggs, W. E., McNaughton, B. L., Gothard, K. M., and Markus, E. J. (1993). An information-theoretic approach to deciphering the hippocampal code. Adv. Neural Inform. Process Syst. 5, 1030-1037.
Tovée, M.J., Rolls, E.T., Treves, A., and Bellis, R.P. (1993). Information encoding and the responses of single neurons in the primate temporal visual cortex. J Neurophysiol 70, 640-654.
Treves, A., and Panzeri, S. (1995). The Upward Bias in Measures of Information Derived from Limited Data Samples. Neural Computation 7, 399-407.
Verdú, S. (2019). Empirical Estimation of Information Measures: A Literature Guide. Entropy (Basel) 21.
Wagstaff, K.L., and Corsetti, F.A. (2010). An evaluation of information-theoretic methods for detecting structural microbial biosignatures. Astrobiology 10, 363-379.
Yuste, R., Hawrylycz, M., Aalling, N., Aguilar-Valles, A., Arendt, D., Armañanzas, R., Ascoli, G.A., Bielza, C., Bokharaie, V., Bergmann, T.B., et al. (2020). A community-based transcriptomics classification and nomenclature of neocortical cell types. Nat Neurosci 23, 1456-1468.
Zbili, M., and Rama, S. (2021). A quick and easy way to estimate entropy and mutual information for neuroscience. bioRxiv, 2020.08.04.236174.