	<rdf:RDF xmlns:admin="http://webns.net/mvcb/" xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:prism="http://purl.org/rss/1.0/modules/prism/" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/">
	<channel rdf:about="https://biorxiv.org">
	<admin:errorReportsTo rdf:resource="mailto:biorxiv@cshlpress.edu"/>
	<title>bioRxiv Channel: Neuromatch Conference</title>
	<link>https://biorxiv.org</link>
	<description>
	This feed contains articles for bioRxiv Channel "Neuromatch Conference"
	</description>

		<items>
	<rdf:Seq>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2020.10.23.352021v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2020.08.06.239533v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.10.09.463755v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2020.11.05.369827v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.10.01.462157v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.10.07.463105v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.02.18.431725v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.08.31.458365v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.05.17.444510v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.02.16.430904v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.05.25.445587v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2020.11.13.381467v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.08.11.455910v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.01.30.428936v1?rss=1"/>
		<rdf:li rdf:resource="https://biorxiv.org/cgi/content/short/2021.06.28.450213v1?rss=1"/>
	</rdf:Seq>
	</items>
	<prism:eIssn/>
	<prism:publicationName>bioRxiv</prism:publicationName>
	<prism:issn/>

	<image rdf:resource=""/>
	</channel>
	<image rdf:about="">
	<title>bioRxiv</title>
	<url/>
	<link>https://biorxiv.org</link>
	</image>
	<item rdf:about="https://biorxiv.org/cgi/content/short/2020.10.23.352021v1?rss=1">
<title>
<![CDATA[
A unified physiological framework of transitions between seizures, status epilepticus and depolarization block at the single neuron level 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.10.23.352021v1?rss=1
</link>
<description><![CDATA[
The majority of seizures recorded in humans and experimental animal models can be described by a generic phenomenological mathematical model, The Epileptor. In this model, seizure-like events (SLEs) are driven by a slow variable and occur via saddle-node (SN) and homoclinic bifurcations at seizure onset and offset, respectively. Here we investigated SLEs at the single-cell level using a biophysically relevant neuron model comprising a slow/fast system of four equations. The two equations of the slow subsystem describe ion concentration variations, and the two equations of the fast subsystem delineate the electrophysiological activity of the neuron. Using extracellular K+ as a slow variable, we report that SLEs with SN/homoclinic bifurcations can readily occur at the single-cell level when extracellular K+ reaches a critical value. In patients and experimental models, seizures can also evolve into sustained ictal activity (SIA) and depolarization block (DB), activities which are also part of the dynamic repertoire of the Epileptor. Increasing the extracellular concentration of K+ in the model to values found during experimental status epilepticus and DB, we show that SIA and DB can also occur at the single-cell level. Thus, seizures, SIA and DB, which were first identified as network events, can exist in a unified framework of a biophysical model at the single-neuron level and exhibit dynamics similar to those observed in the Epileptor.

Author Summary: Epilepsy is a neurological disorder characterized by the occurrence of seizures. Seizures have been characterized in patients and in experimental models, at both macroscopic and microscopic scales, using electrophysiological recordings. Experimental work has allowed the establishment of a detailed taxonomy of seizures, which can be described by mathematical models. We can distinguish two main types of models. Phenomenological (generic) models have few parameters and variables and permit detailed dynamical studies, often capturing the majority of activities observed in experimental conditions. But they also have abstract parameters, making biological interpretation difficult. Biophysical models, on the other hand, use a large number of variables and parameters due to the complexity of the biological systems they represent. Because of the multiplicity of solutions, it is difficult to extract general dynamical rules. In the present work, we integrate both approaches and reduce a detailed biophysical model to sufficiently low-dimensional equations, thus maintaining the advantages of a generic model. We propose, at the single-cell level, a unified framework of different pathological activities: seizures, depolarization block, and sustained ictal activity.
]]></description>
<dc:creator>Depannemaecker, D.</dc:creator>
<dc:creator>Ivanov, A.</dc:creator>
<dc:creator>Lillo, D.</dc:creator>
<dc:creator>Spek, L.</dc:creator>
<dc:creator>Bernard, C.</dc:creator>
<dc:creator>Jirsa, V.</dc:creator>
<dc:date>2020-10-23</dc:date>
<dc:identifier>doi:10.1101/2020.10.23.352021</dc:identifier>
<dc:title><![CDATA[A unified physiological framework of transitions between seizures, status epilepticus and depolarization block at the single neuron level]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-10-23</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.08.06.239533v1?rss=1">
<title>
<![CDATA[
Confidence-controlled Hebbian learning efficiently extracts category membership from stimuli encoded in view of a categorization task 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.08.06.239533v1?rss=1
</link>
<description><![CDATA[
In experiments on perceptual decision-making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus-encoding layer. For the latter, we assume a standard layer of a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, authors have hypothesized that the learning rate is modulated by the reward at each trial. Surprisingly, we find that, when the coding layer has been optimized in view of the categorization task, such reward-modulated Hebbian learning (RMHL) fails to efficiently extract the category membership. In a previous work we showed that the attractor neural network's nonlinear dynamics accounts for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance.
]]></description>
<dc:creator>Berlemont, K.</dc:creator>
<dc:creator>Nadal, J.-P.</dc:creator>
<dc:date>2020-08-07</dc:date>
<dc:identifier>doi:10.1101/2020.08.06.239533</dc:identifier>
<dc:title><![CDATA[Confidence-controlled Hebbian learning efficiently extracts category membership from stimuli encoded in view of a categorization task]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-08-07</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.09.463755v1?rss=1">
<title>
<![CDATA[
Parietal and Motor Cortical Dynamics Differentially Shape the Computation of Choice History Bias 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.09.463755v1?rss=1
</link>
<description><![CDATA[
Humans and other animals tend to repeat or alternate their previous choices, even when judging sensory stimuli presented in a random sequence. It is unclear if and how sensory, associative, and motor cortical circuits produce these idiosyncratic behavioral biases. Here, we combined behavioral modeling of a visual perceptual decision with magnetoencephalographic (MEG) analyses of neural dynamics across multiple regions of the human cerebral cortex. We identified distinct history-dependent neural signals in motor and posterior parietal cortex. Gamma-band activity in parietal cortex tracked previous choices in a sustained fashion and biased evidence accumulation toward choice repetition; sustained beta-band activity in motor cortex inversely reflected the previous motor action and biased the accumulation starting point toward alternation. The parietal, but not the motor, signal mediated the impact of the previous choice on the current one and reflected individual differences in choice repetition. In sum, parietal cortical signals seem to play a key role in shaping choice sequences.
]]></description>
<dc:creator>Urai, A. E.</dc:creator>
<dc:creator>Donner, T. H.</dc:creator>
<dc:date>2021-10-12</dc:date>
<dc:identifier>doi:10.1101/2021.10.09.463755</dc:identifier>
<dc:title><![CDATA[Parietal and Motor Cortical Dynamics Differentially Shape the Computation of Choice History Bias]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-12</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.11.05.369827v1?rss=1">
<title>
<![CDATA[
Modelling the neural code in large populations of correlated neurons 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.11.05.369827v1?rss=1
</link>
<description><![CDATA[
Neurons respond selectively to stimuli, and thereby define a code that associates stimuli with population response patterns. Certain correlations within population responses (noise correlations) significantly impact the information content of the code, especially in large populations. Understanding the neural code thus necessitates response models that quantify the coding properties of modelled populations, while fitting large-scale neural recordings and capturing noise correlations. In this paper we propose a class of response models based on mixture models and exponential families. We show how to fit our models with expectation-maximization, and that they capture diverse variability and covariability in recordings of macaque primary visual cortex. We also show how they facilitate accurate Bayesian decoding, provide a closed-form expression for the Fisher information, and are compatible with theories of probabilistic population coding. Our framework could allow researchers to quantitatively validate the predictions of neural coding theories against both large-scale neural recordings and cognitive performance.
]]></description>
<dc:creator>Sokoloski, S.</dc:creator>
<dc:creator>Aschner, A.</dc:creator>
<dc:creator>Coen-Cagli, R.</dc:creator>
<dc:date>2020-11-06</dc:date>
<dc:identifier>doi:10.1101/2020.11.05.369827</dc:identifier>
<dc:title><![CDATA[Modelling the neural code in large populations of correlated neurons]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-11-06</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.01.462157v1?rss=1">
<title>
<![CDATA[
Context-dependent selectivity to natural scenes in the retina 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.01.462157v1?rss=1
</link>
<description><![CDATA[
Retinal ganglion cells extract specific features from natural scenes and send this information to the brain. In particular, they respond to local light increases (ON responses) and/or decreases (OFF responses). However, it is unclear whether this ON-OFF selectivity, characterized with synthetic stimuli, is maintained when the cells are stimulated with natural scenes. Here we recorded the responses of ganglion cells of mice and axolotls to stimuli composed of natural images slightly perturbed by patterns of random noise, to determine their selectivity during natural stimulation. The ON-OFF selectivity strongly depended on the natural image: a single ganglion cell can signal a luminance increase for one natural image and a luminance decrease for another. Modeling and experiments showed that this was due to the non-linear combination of different pathways of the retinal circuit. Despite the versatility of the ON-OFF selectivity, a systematic analysis demonstrated that contrast was reliably encoded in these responses. Our perturbative approach thus reveals that, during natural scene stimulation, retinal ganglion cells are selective to more complex features than initially thought.
]]></description>
<dc:creator>Goldin, M. A.</dc:creator>
<dc:creator>Lefebvre, B.</dc:creator>
<dc:creator>Virgili, S.</dc:creator>
<dc:creator>Ecker, A.</dc:creator>
<dc:creator>Mora, T.</dc:creator>
<dc:creator>Ferrari, U.</dc:creator>
<dc:creator>Marre, O.</dc:creator>
<dc:date>2021-10-03</dc:date>
<dc:identifier>doi:10.1101/2021.10.01.462157</dc:identifier>
<dc:title><![CDATA[Context-dependent selectivity to natural scenes in the retina]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-03</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.07.463105v1?rss=1">
<title>
<![CDATA[
Preserved motor representations after paralysis 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.07.463105v1?rss=1
</link>
<description><![CDATA[
Neural plasticity allows us to learn skills and incorporate new experiences. What happens when our lived experiences fundamentally change, such as after a severe injury? To address this question, we analyzed intracortical population activity in a tetraplegic adult as she controlled a virtual hand through a brain-computer interface (BCI). By attempting to move her fingers, she could accurately drive the corresponding virtual fingers. Neural activity during finger movements exhibited robust representational structure and dynamics that matched the representational structure previously identified in able-bodied individuals. The finger representational structure was consistent during extended use, even though the structure contributed to BCI decoding errors. Our results suggest that motor representations are remarkably stable, even after complete paralysis. BCIs re-engage these preserved representations to restore lost motor functions.
]]></description>
<dc:creator>Guan, C.</dc:creator>
<dc:creator>Aflalo, T.</dc:creator>
<dc:creator>Zhang, C.</dc:creator>
<dc:creator>Rosario, E. R.</dc:creator>
<dc:creator>Pouratian, N.</dc:creator>
<dc:creator>Andersen, R. A.</dc:creator>
<dc:date>2021-10-09</dc:date>
<dc:identifier>doi:10.1101/2021.10.07.463105</dc:identifier>
<dc:title><![CDATA[Preserved motor representations after paralysis]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-09</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.02.18.431725v1?rss=1">
<title>
<![CDATA[
Ultrastructural analysis of dendritic spine necks reveals a continuum of spine morphologies 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.02.18.431725v1?rss=1
</link>
<description><![CDATA[
Dendritic spines are membranous protrusions with a bulbous head connected to the dendrite by a thin neck; they receive essentially all excitatory inputs in most mammalian neurons. Spines have a wide variety of morphologies that likely have a significant effect on their biochemical and electrical properties. The question of whether spines belong to distinct morphological or functional subtypes or constitute a continuum is still open. To address this question, it is important to measure spine necks objectively. Recent advances in electron microscopy enable automatic reconstructions of 3D spines with nanometer precision. Analyzing ultrastructural reconstructions from mouse neocortical neurons with computer vision algorithms, we demonstrate that the vast majority of spines can be rigorously separated into head and neck components. Analysis of the head and neck morphologies reveals a continuous distribution of parameters. The spine neck diameter, but not the neck length, was correlated with the head volume. Spines with larger head volumes often had a spine apparatus, and pairs of spines in a post-synaptic cell contacted by the same axon had similar head volumes. Our data are consistent with a lack of morphological categories of spines and indicate that the morphologies of the spine neck and head are independently regulated. These results have repercussions for our understanding of the function of dendritic spines in neuronal circuits.
]]></description>
<dc:creator>Ofer, N.</dc:creator>
<dc:creator>Berger, D. R.</dc:creator>
<dc:creator>Kasthuri, N.</dc:creator>
<dc:creator>Lichtman, J. W.</dc:creator>
<dc:creator>Yuste, R.</dc:creator>
<dc:date>2021-02-18</dc:date>
<dc:identifier>doi:10.1101/2021.02.18.431725</dc:identifier>
<dc:title><![CDATA[Ultrastructural analysis of dendritic spine necks reveals a continuum of spine morphologies]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-02-18</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.31.458365v1?rss=1">
<title>
<![CDATA[
A comprehensive neural simulation of slow-wave sleep and highly responsive wakefulness dynamics 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.31.458365v1?rss=1
</link>
<description><![CDATA[
Hallmarks of neural dynamics during healthy human brain states span spatial scales from neuromodulators acting on microscopic ion channels to macroscopic changes in communication between brain regions. Developing a scale-integrated understanding of neural dynamics has therefore remained challenging. Here, we perform the integration across scales using mean-field modeling of Adaptive Exponential (AdEx) neurons, explicitly incorporating intrinsic properties of excitatory and inhibitory neurons. We report that when AdEx mean-field neural populations are connected via structural tracts defined by the human connectome, macroscopic dynamics resembling human brain activity emerge. Importantly, the model can qualitatively and quantitatively account for properties of empirical spontaneous and stimulus-evoked dynamics in the space, time, phase, and frequency domains. Remarkably, the model also reproduces brain-wide enhanced responsiveness and capacity to encode information, particularly during wake-like states, as quantified using the perturbational complexity index. The model was run using The Virtual Brain (TVB) simulator and is open-access in EBRAINS. This approach provides not only a scale-integrated understanding of brain states and their underlying mechanisms, but also open-access tools to investigate brain responsiveness, toward producing a more unified, formal understanding of experimental data from conscious and unconscious states, as well as their associated pathologies.
]]></description>
<dc:creator>Goldman, J. S.</dc:creator>
<dc:creator>Kusch, L.</dc:creator>
<dc:creator>Yalcinkaya, B. H.</dc:creator>
<dc:creator>Depannemaecker, D.</dc:creator>
<dc:creator>Nghiem, T.-A. E.</dc:creator>
<dc:creator>Jirsa, V.</dc:creator>
<dc:creator>Destexhe, A.</dc:creator>
<dc:date>2021-09-01</dc:date>
<dc:identifier>doi:10.1101/2021.08.31.458365</dc:identifier>
<dc:title><![CDATA[A comprehensive neural simulation of slow-wave sleep and highly responsive wakefulness dynamics]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.05.17.444510v1?rss=1">
<title>
<![CDATA[
Rat sensitivity to multipoint statistics is predicted by efficient coding of natural scenes 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.05.17.444510v1?rss=1
</link>
<description><![CDATA[
Efficient processing of sensory data requires adapting the neuronal encoding strategy to the statistics of natural stimuli. Previously, in Hermundstad et al. 2014, we showed that local multipoint correlation patterns that are most variable in natural images are also the most perceptually salient for human observers, in a way that is compatible with the efficient coding principle. Understanding the neuronal mechanisms underlying such adaptation to image statistics will require performing invasive experiments that are impossible in humans. Therefore, it is important to understand whether a similar phenomenon can be detected in animal species that allow for powerful experimental manipulations, such as rodents. Here we selected four image statistics (from single- to four-point correlations) and trained four groups of rats to discriminate between white noise patterns and binary textures containing variable intensity levels of one of these statistics. We interpreted the resulting psychometric data with an ideal observer model, finding a sharp decrease in sensitivity from 2- to 4-point correlations and a further decrease from 4- to 3-point correlations. This ranking fully reproduces the trend we previously observed in humans, thus extending a direct demonstration of efficient coding to a species where neuronal and developmental processes can be interrogated and causally manipulated.
]]></description>
<dc:creator>Caramellino, R.</dc:creator>
<dc:creator>Piasini, E.</dc:creator>
<dc:creator>Buccellato, A.</dc:creator>
<dc:creator>Carboncino, A.</dc:creator>
<dc:creator>Balasubramanian, V.</dc:creator>
<dc:creator>Zoccolan, D.</dc:creator>
<dc:date>2021-05-18</dc:date>
<dc:identifier>doi:10.1101/2021.05.17.444510</dc:identifier>
<dc:title><![CDATA[Rat sensitivity to multipoint statistics is predicted by efficient coding of natural scenes]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-05-18</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.02.16.430904v1?rss=1">
<title>
<![CDATA[
Predictive coding is a consequence of energy efficiency in recurrent neural networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.02.16.430904v1?rss=1
</link>
<description><![CDATA[
Predictive coding represents a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections, and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down driven predictions, we furthermore demonstrate, via virtual lesioning experiments, that networks perform predictions on two timescales: fast lateral predictions among sensory units, and slower prediction cycles that integrate evidence over time.
]]></description>
<dc:creator>Ali, A.</dc:creator>
<dc:creator>Ahmad, N.</dc:creator>
<dc:creator>de Groot, E.</dc:creator>
<dc:creator>van Gerven, M. A. J.</dc:creator>
<dc:creator>Kietzmann, T. C.</dc:creator>
<dc:date>2021-02-16</dc:date>
<dc:identifier>doi:10.1101/2021.02.16.430904</dc:identifier>
<dc:title><![CDATA[Predictive coding is a consequence of energy efficiency in recurrent neural networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-02-16</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.05.25.445587v1?rss=1">
<title>
<![CDATA[
A neural network account of memory replay and knowledge consolidation 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.05.25.445587v1?rss=1
</link>
<description><![CDATA[
Replay can consolidate memories through offline neural reactivation related to past experiences. Category knowledge is learned across multiple experiences, and its subsequent generalisation is promoted by consolidation and replay during rest and sleep. However, aspects of replay are difficult to determine from neuroimaging studies. We provided insights into category knowledge replay by simulating these processes in a neural network which approximated the roles of the human ventral visual stream and hippocampus. Generative replay, akin to imagining new category instances, facilitated generalisation to new experiences. Consolidation-related replay may therefore help to prepare us for the future as much as remember the past. Generative replay was more effective in later network layers functionally similar to the lateral occipital cortex than in layers corresponding to early visual cortex, drawing a distinction between neural replay and its relevance to consolidation. Category replay was most beneficial for newly acquired knowledge, suggesting replay helps us adapt to changes in our environment. Finally, we present a novel mechanism for the observation that the brain selectively consolidates weaker information: a reinforcement learning process in which categories were replayed according to their contribution to network performance. This reinforces the idea of consolidation-related replay as an active rather than passive process.
]]></description>
<dc:creator>Barry, D. N.</dc:creator>
<dc:creator>Love, B. C.</dc:creator>
<dc:date>2021-05-26</dc:date>
<dc:identifier>doi:10.1101/2021.05.25.445587</dc:identifier>
<dc:title><![CDATA[A neural network account of memory replay and knowledge consolidation]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-05-26</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.11.13.381467v1?rss=1">
<title>
<![CDATA[
Stimulus-specific plasticity in human visual gamma-band activity and functional connectivity 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.11.13.381467v1?rss=1
</link>
<description><![CDATA[
Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal gamma-band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and found that gamma decreases across ~10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, changes induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms.
]]></description>
<dc:creator>Stauch, B. J.</dc:creator>
<dc:creator>Peter, A.</dc:creator>
<dc:creator>Schuler, H.</dc:creator>
<dc:creator>Fries, P.</dc:creator>
<dc:date>2020-11-13</dc:date>
<dc:identifier>doi:10.1101/2020.11.13.381467</dc:identifier>
<dc:title><![CDATA[Stimulus-specific plasticity in human visual gamma-band activity and functional connectivity]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-11-13</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.11.455910v1?rss=1">
<title>
<![CDATA[
Cryptographic-like mechanism allows scatter-hoarders to cache and retrieve their food secretly 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.11.455910v1?rss=1
</link>
<description><![CDATA[
The brain's extraordinary abilities are often associated with its capacity to learn and adapt. But memory and plasticity have their limitations, especially when faced with tasks such as retrieving thousands of food items, as in the case of scatter-hoarding animals. Here, we suggest a brain mechanism that works by utilizing cryptographic principles in lieu of plasticity. Rather than memorizing the locations of their caches, as previously suggested, we propose that scatter-hoarding animals use a single cryptographic-like mechanism for both caching and retrieval. The mathematical model we developed functions similarly to hippocampal spatial cells, which respond to an animal's positional attention. We know that the region that activates each spatial cell remains consistent across subsequent visits to the same area, but not between areas. This remapping, combined with the uniqueness of cognitive maps, produces a persistent crypto-hash function for both food caching and retrieval. We show that our model is consistent with previous observations, such as animals' ability to prioritize food items by perishability or nutritional value. The model makes several measurable predictions regarding scatter hoarding and the factors that can limit an animal's retrieval success. Finally, while focusing here on scatter hoarding, the mechanism we present might be utilized by the brain in other ways, providing essentially infinite retention capacity for structured data.
]]></description>
<dc:creator>Forkosh, O.</dc:creator>
<dc:date>2021-08-11</dc:date>
<dc:identifier>doi:10.1101/2021.08.11.455910</dc:identifier>
<dc:title><![CDATA[Cryptographic-like mechanism allows scatter-hoarders to cache and retrieve their food secretly]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-08-11</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.01.30.428936v1?rss=1">
<title>
<![CDATA[
Separated and overlapping neural coding of face and body identity 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.01.30.428936v1?rss=1
</link>
<description><![CDATA[
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g. viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognize three identities, and then recorded their brain activity using fMRI while they viewed face and body images of the three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or the viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across neural activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
]]></description>
<dc:creator>Foster, C.</dc:creator>
<dc:creator>Zhao, M.</dc:creator>
<dc:creator>Bolkart, T.</dc:creator>
<dc:creator>Black, M. J.</dc:creator>
<dc:creator>Bartels, A.</dc:creator>
<dc:creator>Buelthoff, I.</dc:creator>
<dc:date>2021-02-01</dc:date>
<dc:identifier>doi:10.1101/2021.01.30.428936</dc:identifier>
<dc:title><![CDATA[Separated and overlapping neural coding of face and body identity]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-02-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.06.28.450213v1?rss=1">
<title>
<![CDATA[
Directly interfacing brain and deep networks exposes non-hierarchical visual processing 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.06.28.450213v1?rss=1
</link>
<description><![CDATA[
One reason the mammalian visual system is viewed as hierarchical, such that successive stages of processing contain ever higher-level information, is because of functional correspondences with deep convolutional neural networks (DCNNs). However, these correspondences between brain and model activity involve shared, not task-relevant, variance. We propose a stricter test of correspondence: If a DCNN layer corresponds to a brain region, then replacing model activity with brain activity should successfully drive the DCNN's object recognition decision. Using this approach on three datasets, we found all regions along the ventral visual stream best corresponded with later model layers, indicating all stages of processing contained higher-level information about object category. Time course analyses suggest long-range recurrent connections transmit object class information from late to early visual areas.
]]></description>
<dc:creator>Sexton, N. J.</dc:creator>
<dc:creator>Love, B. C.</dc:creator>
<dc:date>2021-06-29</dc:date>
<dc:identifier>doi:10.1101/2021.06.28.450213</dc:identifier>
<dc:title><![CDATA[Directly interfacing brain and deep networks exposes non-hierarchical visual processing]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-06-29</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.01.07.425323v1?rss=1">
<title>
<![CDATA[
Online learning for orientation estimation during translation in an insect ring attractor network 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.01.07.425323v1?rss=1
</link>
<description><![CDATA[
Insect neural systems are a promising source of inspiration for new algorithms for navigation, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results here may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments.

Summary: An insect neural connectivity-constrained model performs sensor fusion and online learning for orientation estimation.
]]></description>
<dc:creator>Robinson, B. S.</dc:creator>
<dc:creator>Norman-Tenazas, R.</dc:creator>
<dc:creator>Cervantes, M.</dc:creator>
<dc:creator>Symonette, D.</dc:creator>
<dc:creator>Johnson, E. C.</dc:creator>
<dc:creator>Joyce, J.</dc:creator>
<dc:creator>Rivlin, P. K.</dc:creator>
<dc:creator>Hwang, G.</dc:creator>
<dc:creator>Zhang, K.</dc:creator>
<dc:creator>Gray-Roncal, W. R.</dc:creator>
<dc:date>2021-01-07</dc:date>
<dc:identifier>doi:10.1101/2021.01.07.425323</dc:identifier>
<dc:title><![CDATA[Online learning for orientation estimation during translation in an insect ring attractor network]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-01-07</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.09.25.461804v1?rss=1">
<title>
<![CDATA[
Flygenvectors: The spatial and temporal structure of neural activity across the fly brain 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.09.25.461804v1?rss=1
</link>
<description><![CDATA[
What are the spatial and temporal scales of brainwide neuronal activity, and how do activities at different scales interact? We used SCAPE microscopy to image a large fraction of the central brain of adult Drosophila melanogaster with high spatiotemporal resolution while flies engaged in a variety of behaviors, including running, grooming and flailing. This revealed neural representations of behavior on multiple spatial and temporal scales. The activity of most neurons across the brain correlated (or, in some cases, anticorrelated) with running and flailing over timescales that ranged from seconds to almost a minute. Grooming elicited a much weaker global response. Although these behaviors accounted for a large fraction of neural activity, residual activity not directly correlated with behavior was high dimensional. Many dimensions of the residual activity reflect the activity of small clusters of spatially organized neurons that may correspond to genetically defined cell types. These clusters participate in the global dynamics, indicating that neural activity reflects a combination of local and broadly distributed components. This suggests that microcircuits with highly specified functions are provided with knowledge of the larger context in which they operate, conferring a useful balance of specificity and flexibility.
]]></description>
<dc:creator>Schaffer, E. S.</dc:creator>
<dc:creator>Mishra, N.</dc:creator>
<dc:creator>Whiteway, M. R.</dc:creator>
<dc:creator>Li, W.</dc:creator>
<dc:creator>Vancura, M. B.</dc:creator>
<dc:creator>Freedman, J.</dc:creator>
<dc:creator>Patel, K. B.</dc:creator>
<dc:creator>Voleti, V.</dc:creator>
<dc:creator>Paninski, L.</dc:creator>
<dc:creator>Hillman, E. M. C.</dc:creator>
<dc:creator>Abbott, L.</dc:creator>
<dc:creator>Axel, R.</dc:creator>
<dc:date>2021-09-26</dc:date>
<dc:identifier>doi:10.1101/2021.09.25.461804</dc:identifier>
<dc:title><![CDATA[Flygenvectors: The spatial and temporal structure of neural activity across the fly brain]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-26</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.09.09.459651v1?rss=1">
<title>
<![CDATA[
Structured random receptive fields enable informative sensory encodings 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.09.09.459651v1?rss=1
</link>
<description><![CDATA[
Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
]]></description>
<dc:creator>Pandey, B.</dc:creator>
<dc:creator>Pachitariu, M.</dc:creator>
<dc:creator>Brunton, B. W.</dc:creator>
<dc:creator>Harris, K. D.</dc:creator>
<dc:date>2021-09-11</dc:date>
<dc:identifier>doi:10.1101/2021.09.09.459651</dc:identifier>
<dc:title><![CDATA[Structured random receptive fields enable informative sensory encodings]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-11</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.30.437555v1?rss=1">
<title>
<![CDATA[
A novel theoretical framework for simultaneous measurement of excitatory and inhibitory conductances 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.30.437555v1?rss=1
</link>
<description><![CDATA[
Firing of neurons throughout the brain is determined by the precise relations between excitatory and inhibitory inputs and disruption of their balance underlies many psychiatric diseases. Whether or not these inputs covary over time or between repeated stimuli remains unclear due to the lack of experimental methods for measuring both inputs simultaneously. We developed a new analytical framework for instantaneous and simultaneous measurements of both the excitatory and inhibitory neuronal inputs during a single trial under current clamp recording. This can be achieved by injecting a current composed of two high frequency sinusoidal components followed by analytical extraction of the conductances. We demonstrate the ability of this method to measure both inputs in a single trial under realistic recording constraints and from morphologically realistic CA1 pyramidal model cells. Experimental implementation of our new method will facilitate the understanding of fundamental questions about the health and disease of the nervous system.

Classification: System Neuroscience, Cellular and Molecular Neuroscience
]]></description>
<dc:creator>Muller-Komorowska, D.</dc:creator>
<dc:creator>Parabucki, A.</dc:creator>
<dc:creator>Elyasaf, G.</dc:creator>
<dc:creator>Katz, Y.</dc:creator>
<dc:creator>Beck, H.</dc:creator>
<dc:creator>Lampl, I.</dc:creator>
<dc:date>2021-03-30</dc:date>
<dc:identifier>doi:10.1101/2021.03.30.437555</dc:identifier>
<dc:title><![CDATA[A novel theoretical framework for simultaneous measurement of excitatory and inhibitory conductances]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-30</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.11.29.403089v1?rss=1">
<title>
<![CDATA[
Transient neuronal suppression for exploitation of new sensory evidence 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.11.29.403089v1?rss=1
</link>
<description><![CDATA[
In noisy but stationary environments, decisions should be based on the temporal integration of sequentially sampled evidence. This strategy has been supported by many behavioral studies and is qualitatively consistent with neural activity in multiple brain areas. By contrast, decision-making in the face of non-stationary sensory evidence remains poorly understood. Here, we trained monkeys to identify the dominant color of a dynamically refreshed bicolor patch that becomes informative after a variable delay. Animals' behavioral responses were briefly suppressed after evidence changes, and many neurons in the frontal eye field displayed a corresponding dip in activity at this time, similar to that frequently observed after stimulus onset. Generalized drift-diffusion models revealed consistency of behavior and neural activity with brief suppression of motor output, but not with pausing or resetting of evidence accumulation. These results suggest that momentary arrest of motor preparation is an important component of dynamic perceptual decision making.
]]></description>
<dc:creator>Shinn, M.</dc:creator>
<dc:creator>Lee, D.</dc:creator>
<dc:creator>Murray, J. D.</dc:creator>
<dc:creator>Seo, H.</dc:creator>
<dc:date>2020-11-30</dc:date>
<dc:identifier>doi:10.1101/2020.11.29.403089</dc:identifier>
<dc:title><![CDATA[Transient neuronal suppression for exploitation of new sensory evidence]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-11-30</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.12.435035v1?rss=1">
<title>
<![CDATA[
Learning accurate path integration in a ring attractor model of the head direction system 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.12.435035v1?rss=1
</link>
<description><![CDATA[
Ring attractor models for angular path integration have recently received strong experimental support. To function as integrators, head-direction (HD) circuits require precisely tuned connectivity, but it is currently unknown how such tuning could be achieved. Here, we propose a network model in which a local, biologically plausible learning rule adjusts synaptic efficacies during development, guided by supervisory allothetic cues. Applied to the Drosophila HD system, the model learns to path-integrate accurately and develops a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Our model predicts that path integration requires supervised learning during a developmental phase. The model setting is general and also applies to architectures that lack the physical topography of a ring, like the mammalian HD system.
]]></description>
<dc:creator>Vafidis, P.</dc:creator>
<dc:creator>Owald, D.</dc:creator>
<dc:creator>D'Albis, T.</dc:creator>
<dc:creator>Kempter, R.</dc:creator>
<dc:date>2021-03-12</dc:date>
<dc:identifier>doi:10.1101/2021.03.12.435035</dc:identifier>
<dc:title><![CDATA[Learning accurate path integration in a ring attractor model of the head direction system]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-12</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.04.05.438491v1?rss=1">
<title>
<![CDATA[
Dynamic task-belief is an integral part of decision-making 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.04.05.438491v1?rss=1
</link>
<description><![CDATA[
Natural decisions involve two seemingly separable processes: inferring the relevant task (task-belief) and performing the believed-relevant task. The assumed separability has led to the traditional practice of studying task-switching and perceptual decision-making individually. Here, we used a novel paradigm to manipulate and measure macaque monkeys' task-belief, and demonstrated inextricable neuronal links between flexible task-belief and perceptual decision-making. We showed that in animals, but not artificial networks that performed as well or better than the animals, stronger task-belief is associated with better perception. Correspondingly, recordings from neuronal populations in cortical areas 7a and V1 revealed that stronger task-belief is associated with better discriminability of the believed-relevant but not the believed-irrelevant feature. Perception also impacts belief updating: noise fluctuations in V1 help explain how task-belief is updated. Our results demonstrate that complex tasks and multi-area recordings can reveal fundamentally new principles of how biology affects behavior in health and disease.
]]></description>
<dc:creator>Xue, C.</dc:creator>
<dc:creator>Kramer, L. E.</dc:creator>
<dc:creator>Cohen, M. R.</dc:creator>
<dc:date>2021-04-06</dc:date>
<dc:identifier>doi:10.1101/2021.04.05.438491</dc:identifier>
<dc:title><![CDATA[Dynamic task-belief is an integral part of decision-making]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-04-06</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.18.456777v1?rss=1">
<title>
<![CDATA[
How the insect central complex could coordinate multimodal navigation 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.18.456777v1?rss=1
</link>
<description><![CDATA[
The central complex of the insect midbrain is thought to coordinate insect guidance strategies. Computational models can account for specific behaviours, but their applicability across sensory and task domains remains untested. Here we assess the capacity of our previous model explaining visual navigation to generalise to olfactory navigation and its coordination with other guidance in flies and ants. We show that fundamental to this capacity is the use of a biologically-realistic neural copy-and-shift mechanism that ensures sensory information is presented in a format compatible with the insect steering circuit regardless of its source. Moreover, the same mechanism is shown to transfer cues from unstable/egocentric to stable/geocentric frames of reference, providing a first account of the mechanism by which foraging insects robustly recover from environmental disturbances. We propose that these circuits can be flexibly repurposed by different insect navigators to address their unique ecological needs.
]]></description>
<dc:creator>Sun, X.</dc:creator>
<dc:creator>Yue, S.</dc:creator>
<dc:creator>Mangan, M.</dc:creator>
<dc:date>2021-08-19</dc:date>
<dc:identifier>doi:10.1101/2021.08.18.456777</dc:identifier>
<dc:title><![CDATA[How the insect central complex could coordinate multimodal navigation]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-08-19</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.02.433627v1?rss=1">
<title>
<![CDATA[
Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue renders neural networks more susceptible to sudden changes in synchrony 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.02.433627v1?rss=1
</link>
<description><![CDATA[
A myriad of pathological changes associated with epilepsy can be recast as decreases in cell and circuit heterogeneity. We thus propose recontextualizing epileptogenesis as a process where a reduction in cellular heterogeneity, in part, renders neural circuits less resilient to seizure. By comparing patch clamp recordings from human layer 5 (L5) cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we demonstrate significantly decreased biophysical heterogeneity in seizure generating areas. Implemented computationally, this renders model neural circuits prone to sudden transitions into synchronous states with increased firing activity, paralleling ictogenesis. This computational work also explains the surprising finding of significantly decreased excitability in the population activation functions of neurons from epileptogenic tissue. Finally, mathematical analyses reveal a unique bifurcation structure arising only with low heterogeneity and associated with seizure-like dynamics. Taken together, this work provides experimental, computational, and mathematical support for the theory that ictogenic dynamics accompany a reduction in biophysical heterogeneity.
]]></description>
<dc:creator>Rich, S.</dc:creator>
<dc:creator>Moradi Chameh, H.</dc:creator>
<dc:creator>Lefebvre, J.</dc:creator>
<dc:creator>Valiante, T. A.</dc:creator>
<dc:date>2021-03-03</dc:date>
<dc:identifier>doi:10.1101/2021.03.02.433627</dc:identifier>
<dc:title><![CDATA[Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue renders neural networks more susceptible to sudden changes in synchrony]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-03</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.12.435177v1?rss=1">
<title>
<![CDATA[
Experience-related remapping of temporal encoding by striatal ensembles. 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.12.435177v1?rss=1
</link>
<description><![CDATA[
Temporal control of action is key for a broad range of behaviors and is disrupted in human diseases such as Parkinson's disease and schizophrenia. A brain structure that is critical for temporal control is the dorsal striatum. Experience and learning can influence dorsal striatal neuronal activity, but it is unknown how these neurons change with experience in contexts which require precise temporal control of movement. We investigated this question by recording from medium-spiny neurons (MSNs) in the dorsal striatum of mice as they gained experience controlling their actions in time. We leveraged an interval timing task optimized for mice which required them to "switch" response ports after enough time had passed without receiving a reward. We report three main results. First, we found that time-related ramping activity and response-related activity increased with more experience. Second, temporal decoding by MSN ensembles improved with experience and was predominantly driven by time-related ramping activity. Finally, we found that some MSNs had differential modulation on error trials. These findings enhance our understanding of dorsal striatal temporal processing by demonstrating how MSN ensembles can evolve with experience. Our results can be linked to temporal habituation and illuminate striatal flexibility during interval timing, which may be relevant for human disease.
]]></description>
<dc:creator>Bruce, R.</dc:creator>
<dc:creator>Weber, M.</dc:creator>
<dc:creator>Volkman, R.</dc:creator>
<dc:creator>Oya, M.</dc:creator>
<dc:creator>Emmons, E.</dc:creator>
<dc:creator>Kim, Y.</dc:creator>
<dc:creator>Narayanan, N.</dc:creator>
<dc:date>2021-03-12</dc:date>
<dc:identifier>doi:10.1101/2021.03.12.435177</dc:identifier>
<dc:title><![CDATA[Experience-related remapping of temporal encoding by striatal ensembles.]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-12</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.21.465346v1?rss=1">
<title>
<![CDATA[
Structure in motion: visual motion perception as online hierarchical inference 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.21.465346v1?rss=1
</link>
<description><![CDATA[
Identifying the structure of motion relations in the environment is critical for navigation, tracking, prediction, and pursuit. Yet, little is known about the mental and neural computations that allow the visual system to infer this structure online from a volatile stream of visual information. We propose online hierarchical Bayesian inference as a principled solution for how the brain might solve this complex perceptual task. We derive an online Expectation-Maximization algorithm that explains human percepts qualitatively and quantitatively for a diverse set of stimuli, covering classical psychophysics experiments, ambiguous motion scenes, and illusory motion displays. We thereby identify normative explanations for the origin of human motion structure perception and make testable predictions for new psychophysics experiments. The proposed online hierarchical inference model furthermore affords a neural network implementation which shares properties with motion-sensitive cortical areas and motivates a novel class of experiments to reveal the neural representations of latent structure.
]]></description>
<dc:creator>Bill, J.</dc:creator>
<dc:creator>Gershman, S. J.</dc:creator>
<dc:creator>Drugowitsch, J.</dc:creator>
<dc:date>2021-10-23</dc:date>
<dc:identifier>doi:10.1101/2021.10.21.465346</dc:identifier>
<dc:title><![CDATA[Structure in motion: visual motion perception as online hierarchical inference]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-23</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.30.458264v1?rss=1">
<title>
<![CDATA[
Coordinated drift of receptive fields during noisy representation learning 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.30.458264v1?rss=1
</link>
<description><![CDATA[
Long-term memories and learned behavior are conventionally associated with stable neuronal representations. However, recent experiments showed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational "drift" naturally leads to questions about its causes, dynamics, and functions. Here, we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning, which optimize similarity matching objectives, and, when neural outputs are constrained to be nonnegative, learn localized receptive fields (RFs) that tile the stimulus manifold. We find that the drifting RFs of individual neurons can be characterized by a coordinated random walk, with the effective diffusion constants depending on various parameters such as learning rate, noise amplitude, and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates recent experimental observations in hippocampus and posterior parietal cortex, and makes testable predictions that can be probed in future experiments.
]]></description>
<dc:creator>Qin, S.</dc:creator>
<dc:creator>Farashahi, S.</dc:creator>
<dc:creator>Lipshutz, D.</dc:creator>
<dc:creator>Sengupta, A. M.</dc:creator>
<dc:creator>Chklovskii, D. B.</dc:creator>
<dc:creator>Pehlevan, C.</dc:creator>
<dc:date>2021-09-01</dc:date>
<dc:identifier>doi:10.1101/2021.08.30.458264</dc:identifier>
<dc:title><![CDATA[Coordinated drift of receptive fields during noisy representation learning]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/080374v1?rss=1">
<title>
<![CDATA[
Human anterolateral entorhinal cortex volumes are associated with preclinical cognitive decline 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/080374v1?rss=1
</link>
<description><![CDATA[
We investigated whether older adults without subjective memory complaints, but who present with cognitive decline in the laboratory, demonstrate atrophy in medial temporal lobe (MTL) subregions associated with Alzheimer's disease. Forty community-dwelling older adults were categorized based on Montreal Cognitive Assessment (MoCA) performance. Total grey/white matter, cerebrospinal fluid, and white matter hyperintensity load were quantified from whole-brain T1-weighted and FLAIR magnetic resonance imaging scans, while hippocampal subfields and MTL cortical subregion volumes (CA1, dentate gyrus/CA2/3, subiculum, anterolateral and posteromedial entorhinal, perirhinal, and parahippocampal cortices) were quantified using high-resolution T2-weighted scans. Cognitive status was evaluated using standard neuropsychological assessments. No significant differences were found in the whole-brain measures. However, MTL volumetry revealed that anterolateral entorhinal cortex (alERC) volume -- the same region in which Alzheimer's pathology originates -- was strongly associated with MoCA performance. This is the first study to demonstrate that alERC volume is related to cognitive decline in preclinical, community-dwelling older adults.
]]></description>
<dc:creator>Rosanna K Olsen</dc:creator>
<dc:creator>Lok-Kin Yeung</dc:creator>
<dc:creator>Alix Noly-Gandon</dc:creator>
<dc:creator>Maria C D'Angelo</dc:creator>
<dc:creator>Arber Kacollja</dc:creator>
<dc:creator>Victoria Smith</dc:creator>
<dc:creator>Jennifer D Ryan</dc:creator>
<dc:creator>Morgan D Barense</dc:creator>
<dc:date>2016-10-12</dc:date>
<dc:identifier>doi:10.1101/080374</dc:identifier>
<dc:title><![CDATA[Human anterolateral entorhinal cortex volumes are associated with preclinical cognitive decline]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2016-10-12</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.09.03.458628v1?rss=1">
<title>
<![CDATA[
Where is all the nonlinearity: flexible nonlinear modeling of behaviorally relevant neural dynamics using recurrent neural networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.09.03.458628v1?rss=1
</link>
<description><![CDATA[
Understanding the dynamical transformation of neural activity to behavior requires modeling this transformation while both dissecting its potential nonlinearities and dissociating and preserving its nonlinear behaviorally relevant neural dynamics, which remain unaddressed. We present RNN PSID, a nonlinear dynamic modeling method that enables flexible dissection of nonlinearities, dissociation and preferential learning of neural dynamics relevant to specific behaviors, and causal decoding. We first validate RNN PSID in simulations and then use it to investigate nonlinearities in monkey spiking and LFP activity across four tasks and different brain regions. Nonlinear RNN PSID successfully dissociated and preserved nonlinear behaviorally relevant dynamics, thus outperforming linear and non-preferential nonlinear learning methods in behavior decoding while reaching similar neural prediction. Strikingly, dissecting the nonlinearities with RNN PSID revealed that consistently across all tasks, summarizing the nonlinearity only in the mapping from the latent dynamics to behavior was largely sufficient for predicting behavior and neural activity. RNN PSID provides a novel tool to reveal new characteristics of nonlinear neural dynamics underlying behavior.
]]></description>
<dc:creator>Sani, O. G.</dc:creator>
<dc:creator>Pesaran, B.</dc:creator>
<dc:creator>Shanechi, M. M.</dc:creator>
<dc:date>2021-09-06</dc:date>
<dc:identifier>doi:10.1101/2021.09.03.458628</dc:identifier>
<dc:title><![CDATA[Where is all the nonlinearity: flexible nonlinear modeling of behaviorally relevant neural dynamics using recurrent neural networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-06</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.15.434192v1?rss=1">
<title>
<![CDATA[
Bump attractor dynamics underlying stimulus integration in perceptual estimation tasks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.15.434192v1?rss=1
</link>
<description><![CDATA[
Perceptual decision and continuous stimulus estimation tasks involve making judgments based on accumulated sensory evidence. Network models of evidence integration usually rely on competition between neural populations each encoding a discrete categorical choice and do not maintain information that is necessary for a continuous perceptual judgment. Here, we show that a continuous attractor network can integrate a circular stimulus feature and track the stimulus average in the phase of its activity bump. We show analytically that the network can compute the running average of the stimulus almost optimally, and that the nonlinear internal dynamics affect the temporal weighting of sensory evidence. Whether the network shows early (primacy), uniform or late (recency) weighting depends on the relative strength of the stimuli compared to the bump's amplitude and initial state. The global excitatory drive, a single model parameter, modulates the specific relation between internal dynamics and sensory inputs. We show that this can account for the heterogeneity of temporal weighting profiles and reaction times observed in humans integrating a stream of oriented stimulus frames. Our findings point to continuous attractor dynamics as a plausible mechanism underlying stimulus integration in perceptual estimation tasks.
]]></description>
<dc:creator>Esnaola-Acebes, J. M.</dc:creator>
<dc:creator>Roxin, A.</dc:creator>
<dc:creator>Wimmer, K.</dc:creator>
<dc:date>2021-03-16</dc:date>
<dc:identifier>doi:10.1101/2021.03.15.434192</dc:identifier>
<dc:title><![CDATA[Bump attractor dynamics underlying stimulus integration in perceptual estimation tasks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-16</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.22.457198v1?rss=1">
<title>
<![CDATA[
Stretching and squeezing of neuronal log firing rate distribution by psychedelic and intrinsic brain state transitions 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.22.457198v1?rss=1
</link>
<description><![CDATA[
How psychedelic drugs change the activity of cortical neuronal populations, and whether such changes are specific to the transition into the psychedelic brain state or shared with other brain state transitions, is not well understood. Here, we used Neuropixels probes to record from large populations of neurons in prefrontal cortex of mice given the psychedelic drug TCB-2. Drug ingestion significantly stretched the distribution of log firing rates of the population of recorded neurons. This phenomenon was previously observed across transitions between sleep and wakefulness, which suggested that stretching of the log-rate distribution can be triggered by different kinds of brain state transitions and prompted us to examine it in more detail. We found that modulation of the width of the log-rate distribution of a neuronal population occurred in multiple areas of the cortex and in the hippocampus even in awake drug-free mice, driven by intrinsic fluctuations in their arousal level. Arousal, however, did not explain the stretching of the log-rate distribution by TCB-2. In both psychedelic and naturally occurring brain state transitions, the stretching or squeezing of the log-rate distribution of an entire neuronal population reflected concomitant changes in two subpopulations: one subpopulation underwent a downregulation, and often also a stretching, of its neurons' log-rate distribution, while the other underwent an upregulation and often also a squeezing of its log-rate distribution. In both subpopulations, the stretching and squeezing were a signature of a greater relative impact of the brain state transition on the rates of the slow-firing neurons. These findings reveal a generic pattern of reorganisation of neuronal firing rates by different kinds of brain state transitions.
]]></description>
<dc:creator>Dearnley, B.</dc:creator>
<dc:creator>Dervinis, M.</dc:creator>
<dc:creator>Shaw, M.</dc:creator>
<dc:creator>Okun, M.</dc:creator>
<dc:date>2021-08-23</dc:date>
<dc:identifier>doi:10.1101/2021.08.22.457198</dc:identifier>
<dc:title><![CDATA[Stretching and squeezing of neuronal log firing rate distribution by psychedelic and intrinsic brain state transitions]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-08-23</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.25.465651v1?rss=1">
<title>
<![CDATA[
Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.25.465651v1?rss=1
</link>
<description><![CDATA[
Biological neurons integrate their inputs on dendrites using a diverse range of non-linear functions. However, the majority of artificial neural networks (ANNs) ignore biological neurons' structural complexity and instead use simplified point neurons. Can dendritic properties add value to ANNs? In this paper we investigate this question in the context of continual learning, an area where ANNs suffer from catastrophic forgetting (i.e., ANNs are unable to learn new information without erasing what they previously learned). We propose that dendritic properties can help neurons learn context-specific patterns and invoke highly sparse context-specific subnetworks. Within a continual learning scenario, these task-specific subnetworks interfere minimally with each other and, as a result, the network remembers previous tasks significantly better than standard ANNs. We then show that by combining dendritic networks with Synaptic Intelligence (a biologically motivated method for complex weights) we can achieve significant resilience to catastrophic forgetting, more than either technique can achieve on its own. Our neuron model is directly inspired by the biophysics of sustained depolarization following dendritic NMDA spikes. Our research sheds light on how biological properties of neurons can be used to solve scenarios that are typically impossible for traditional ANNs to solve.
]]></description>
<dc:creator>Grewal, K.</dc:creator>
<dc:creator>Forest, J.</dc:creator>
<dc:creator>Cohen, B.</dc:creator>
<dc:creator>Ahmad, S.</dc:creator>
<dc:date>2021-10-26</dc:date>
<dc:identifier>doi:10.1101/2021.10.25.465651</dc:identifier>
<dc:title><![CDATA[Going Beyond the Point Neuron: Active Dendrites and Sparse Representations for Continual Learning]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-26</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.11.463861v1?rss=1">
<title>
<![CDATA[
A reservoir of timescales in random neural networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.11.463861v1?rss=1
</link>
<description><![CDATA[
The temporal activity of many biological systems, including neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. The mechanisms leading to this temporal heterogeneity remain unknown. Here we show that random neural networks endowed with a distribution of self-couplings, representing functional neural clusters of different sizes, generate multiple timescales of activity spanning several orders of magnitude. When driven by a time-dependent broadband input, slow and fast neural clusters preferentially entrain slow and fast spectral components of the input, respectively, suggesting a potential mechanism for spectral demixing in cortical circuits.
]]></description>
<dc:creator>Stern, M.</dc:creator>
<dc:creator>Istrate, N.</dc:creator>
<dc:creator>Mazzucato, L.</dc:creator>
<dc:date>2021-10-12</dc:date>
<dc:identifier>doi:10.1101/2021.10.11.463861</dc:identifier>
<dc:title><![CDATA[A reservoir of timescales in random neural networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-12</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.02.28.970053v1?rss=1">
<title>
<![CDATA[
Temporal learning among prefrontal and striatal ensembles 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.02.28.970053v1?rss=1
</link>
<description><![CDATA[
Behavioral flexibility requires the prefrontal cortex and striatum. Here, we investigate neuronal ensembles in the medial frontal cortex (MFC) and the dorsomedial striatum (DMS) during one form of behavioral flexibility: learning a new temporal interval. We studied corticostriatal neuronal activity as rodents trained to respond after a 12-second fixed interval (FI12) learned to respond at a shorter 3-second fixed interval (FI3). On FI12 trials, we discovered time-related ramping was reduced in the MFC but not in the DMS in two-interval vs. one-interval sessions. We also found that more DMS neurons than MFC neurons exhibited differential interval-related activity on the first day of two-interval performance. Finally, MFC and DMS ramping was similar with successive days of two-interval performance but DMS temporal decoding increased on FI3 trials. These data suggest that the MFC and DMS play distinct roles during temporal learning and provide insight into corticostriatal circuits.
]]></description>
<dc:creator>Emmons, E. B.</dc:creator>
<dc:creator>Chiuffa Tunes, G.</dc:creator>
<dc:creator>Choi, J.</dc:creator>
<dc:creator>Bruce, R. A.</dc:creator>
<dc:creator>Weber, M.</dc:creator>
<dc:creator>Kim, Y.</dc:creator>
<dc:creator>Narayanan, N.</dc:creator>
<dc:date>2020-03-02</dc:date>
<dc:identifier>doi:10.1101/2020.02.28.970053</dc:identifier>
<dc:title><![CDATA[Temporal learning among prefrontal and striatal ensembles]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-03-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.01.12.426428v1?rss=1">
<title>
<![CDATA[
Informative neural representations of unseen objects during higher-order processing in human brains and deep artificial networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.01.12.426428v1?rss=1
</link>
<description><![CDATA[
A framework to pinpoint the scope of unconscious processing is critical to improve our models of visual consciousness. Previous research observed brain signatures of unconscious processing in visual cortex, but these signatures were not reliably identified. Further, whether unconscious content is represented in high-level stages of the ventral visual stream and linked parieto-frontal areas remains unknown. Using a within-subject, high-precision fMRI approach, we show that unconscious contents can be decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve parieto-frontal substrates. Classifiers trained with multivoxel patterns of conscious items generalised to predict the unconscious counterparts, indicating that their neural representations overlap. These findings suggest revisions to models of consciousness such as the neuronal global workspace. We then provide a computational simulation of visual processing/representation without perceptual sensitivity by using deep neural networks performing a similar visual task. The work provides a framework for pinpointing the representation of unconscious knowledge across different task domains.
]]></description>
<dc:creator>Mei, N.</dc:creator>
<dc:creator>Santana, R.</dc:creator>
<dc:creator>Soto, D.</dc:creator>
<dc:date>2021-01-14</dc:date>
<dc:identifier>doi:10.1101/2021.01.12.426428</dc:identifier>
<dc:title><![CDATA[Informative neural representations of unseen objects during higher-order processing in human brains and deep artificial networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-01-14</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.06.03.131573v1?rss=1">
<title>
<![CDATA[
A geometry of spike sequences: Fast, unsupervised discovery of high-dimensional neural spiking patterns based on optimal transport theory 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.06.03.131573v1?rss=1
</link>
<description><![CDATA[
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure between spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory called SpikeShip, which compares multineuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike timing differences. SpikeShip can be effectively computed for high-dimensional neuronal ensembles, has a low computational cost that is linear in the spike count, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike time distribution, is not affected by firing rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding window approach. We compare the advantages and differences between SpikeShip and other measures such as the SPIKE and Victor-Purpura distances. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding. We show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried complementary information to conventional firing rate codes.
SpikeShip opens new avenues for studying neural coding and memory consolidation by rapid and unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
]]></description>
<dc:creator>Sotomayor-Gomez, B.</dc:creator>
<dc:creator>Battaglia, F. P.</dc:creator>
<dc:creator>Vinck, M.</dc:creator>
<dc:date>2020-06-04</dc:date>
<dc:identifier>doi:10.1101/2020.06.03.131573</dc:identifier>
<dc:title><![CDATA[A geometry of spike sequences: Fast, unsupervised discovery of high-dimensional neural spiking patterns based on optimal transport theory]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-06-04</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/656850v1?rss=1">
<title>
<![CDATA[
A delay in sampling information from temporally autocorrelated visual stimuli 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/656850v1?rss=1
</link>
<description><![CDATA[
Much of our world changes smoothly in time, yet the allocation of attention is typically studied with sudden changes - transients. When stimuli change gradually there is a sizeable lag between when a cue is presented and when an object is sampled (Carlson, Hogendoorn, & Verstraten, 2006; Sheth, Nijhawan & Shimojo, 2000). Yet this lag is not seen with rapid serial visual presentation (RSVP) stimuli where temporally uncorrelated stimuli are presented (Vul, Kanwisher & Nieuwenstein 2008; Goodbourn & Holcombe, 2015). These findings collectively suggest that temporal autocorrelation of a feature paradoxically increases the latency at which information is sampled. This hypothesis was tested by comparing stimuli changing smoothly in time (autocorrelated) to stimuli that change randomly. Participants attempted to report the color coincident with a visual cue. The result was a smaller selection lag for the randomly varying condition relative to the condition with a smooth color trajectory. Our third experiment finds that the increase in selection latency is due to the smoothness of the color change after the cue rather than extrapolated predictions based on the color changes presented before the cue. Together, these results support a theory of attentional drag, whereby attention remains engaged at a location longer when features are changing smoothly. A computational model provides insights into neural mechanisms that might underlie the effect.
]]></description>
<dc:creator>Callahan-Flintoft, C.</dc:creator>
<dc:creator>Holcombe, A. O.</dc:creator>
<dc:creator>Wyble, B.</dc:creator>
<dc:date>2019-05-31</dc:date>
<dc:identifier>doi:10.1101/656850</dc:identifier>
<dc:title><![CDATA[A delay in sampling information from temporally autocorrelated visual stimuli]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2019-05-31</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.12.25.424385v1?rss=1">
<title>
<![CDATA[
Neuronal cascades shape whole-brain functional dynamics at rest 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.12.25.424385v1?rss=1
</link>
<description><![CDATA[
At rest, mammalian brains display remarkable spatiotemporal complexity, evolving through recurrent brain states on a slow timescale of the order of tens of seconds. While the phenomenology of the resting state dynamics is valuable in distinguishing healthy and pathological brains, little is known about its underlying mechanisms. Here, we identify neuronal cascades as a potential mechanism. Using full-brain network modeling, we show that neuronal populations, coupled via a detailed structural connectome, give rise to large-scale cascades of firing rate fluctuations evolving at the same timescale as resting-state networks. The ignition and subsequent propagation of cascades depend upon the brain state and connectivity of each region. The largest cascades produce bursts of Blood-Oxygen-Level-Dependent (BOLD) co-fluctuations at pairs of regions across the brain, which shape the simulated resting-state network dynamics. We experimentally confirm these theoretical predictions. We demonstrate the existence and stability of intermittent epochs of functional connectivity comprising BOLD co-activation bursts in mice and human fMRI. We then provide evidence for the existence and leading role of the neuronal cascades in humans with simultaneous EEG/fMRI recordings. These results show that neuronal cascades are a major determinant of spontaneous fluctuations in brain dynamics at rest.

Significance Statement: Functional connectivity and its dynamics are widely used as a proxy of brain function and dysfunction. Their neuronal underpinnings remain unclear. Using connectome-based modeling, we link the fast microscopic neuronal scale to the slow emergent whole-brain dynamics. We show that cascades of neuronal activations spontaneously propagate in resting state-like conditions. The largest neuronal cascades result in the co-fluctuation of Blood-Oxygen-Level-Dependent signals at pairs of brain regions, which in turn translate to stable brain states. Thus, we provide a theoretical framework for the emergence and the dynamics of resting-state networks. We verify these predictions in empirical mouse fMRI and human EEG/fMRI datasets measured in resting-state conditions. Our work sheds light on the multiscale mechanisms of brain function.
]]></description>
<dc:creator>Rabuffo, G.</dc:creator>
<dc:creator>Fousek, J.</dc:creator>
<dc:creator>Bernard, C.</dc:creator>
<dc:creator>Jirsa, V.</dc:creator>
<dc:date>2020-12-26</dc:date>
<dc:identifier>doi:10.1101/2020.12.25.424385</dc:identifier>
<dc:title><![CDATA[Neuronal cascades shape whole-brain functional dynamics at rest]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-12-26</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.09.16.459819v1?rss=1">
<title>
<![CDATA[
Influence of Rule and Reward-based Strategies on Inferences of Serial Order by Monkeys 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.09.16.459819v1?rss=1
</link>
<description><![CDATA[
Knowledge of transitive relationships between items can contribute to learning the order of a set of stimuli from pairwise comparisons. However, cognitive mechanisms of transitive inferences based on rank order remain unclear, as are contributions of reward magnitude and rule-based inference. To explore these issues, we created a conflict between rule- and reward-based learning during a serial ordering task. Rhesus macaques learned two lists, each containing five stimuli, that were trained exclusively with adjacent pairs. Selection of the higher-ranked item resulted in rewards. "Small reward" lists yielded 2 drops of fluid reward, while "large reward" lists yielded 5 drops. Following training of adjacent pairs, monkeys were tested on novel pairs. One item was selected from each list, such that a ranking rule could conflict with preferences for large rewards. Differences in associated reward magnitude had a strong influence on accuracy, but we also observed a symbolic distance effect, which provided evidence of a rule-based influence on decisions. Reaction time comparisons suggested a conflict between rule- and reward-based processes. We conclude that performance reflects the contributions of two strategies, and that a model-based strategy is employed in the face of a strong countervailing reward incentive.
]]></description>
<dc:creator>Ferhat, A.-T.</dc:creator>
<dc:creator>Jensen, G.</dc:creator>
<dc:creator>Terrace, H. S.</dc:creator>
<dc:creator>Ferrera, V. P.</dc:creator>
<dc:date>2021-09-17</dc:date>
<dc:identifier>doi:10.1101/2021.09.16.459819</dc:identifier>
<dc:title><![CDATA[Influence of Rule and Reward-based Strategies on Inferences of Serial Order by Monkeys]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-17</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.09.16.460722v1?rss=1">
<title>
<![CDATA[
Similar cognitive processing synchronizes brains, hearts, and eyes 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.09.16.460722v1?rss=1
</link>
<description><![CDATA[
Neural, physiological and behavioral signals synchronize between human subjects in a variety of settings. Multiple hypotheses have been proposed to explain this interpersonal synchrony, but there is no clarity under which conditions it arises, for which signals, or whether there is a common underlying mechanism. We hypothesized that similar cognitive processing of a shared stimulus is the source of synchrony between subjects, measured here as inter-subject correlation. To test this we presented informative videos to participants in an attentive and distracted condition and subsequently measured information recall. Inter-subject correlation was observed for electro-encephalography, gaze position, pupil size and heart rate, but not respiration and head movements. The strength of correlation was co-modulated in the different signals, changed with attentional state, and predicted subsequent recall of information presented in the videos. There was robust within-subject coupling between brain, heart and eyes, but not respiration or head movements. The results suggest that inter-subject correlation is the result of similar cognitive processing and thus emerges only for those signals that exhibit a robust brain-body connection. While physiological and behavioral fluctuations may be driven by multiple features of the stimulus, correlation with other individuals is co-modulated by the level of attentional engagement with the stimulus.
]]></description>
<dc:creator>Madsen, J.</dc:creator>
<dc:creator>Parra, L. C.</dc:creator>
<dc:date>2021-09-20</dc:date>
<dc:identifier>doi:10.1101/2021.09.16.460722</dc:identifier>
<dc:title><![CDATA[Similar cognitive processing synchronizes brains, hearts, and eyes]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-09-20</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.27.437315v1?rss=1">
<title>
<![CDATA[
Learning brain dynamics for decoding and predicting individual differences 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.27.437315v1?rss=1
</link>
<description><![CDATA[
Insights from functional Magnetic Resonance Imaging (fMRI), as well as recordings of large numbers of neurons, reveal that many cognitive, emotional, and motor functions depend on the multivariate interactions of brain signals. To decode brain dynamics, we propose an architecture based on recurrent neural networks to uncover distributed spatiotemporal signatures. We demonstrate the potential of the approach using human fMRI data during movie watching and a continuous experimental paradigm. The model was able to learn spatiotemporal patterns that supported 15-way movie-clip classification (~90%) at the level of brain regions, and binary classification of experimental conditions (~60%) at the level of voxels. The model was also able to learn individual differences in measures of fluid intelligence and verbal IQ at levels comparable or better than existing techniques. We propose a dimensionality reduction approach that uncovers low-dimensional trajectories and captures essential informational (i.e., classification related) properties of brain dynamics. Finally, saliency maps were employed to characterize brain-region/voxel importance, and uncovered how dynamic but consistent changes in fMRI activation influenced decoding performance. When applied at the level of voxels, our framework implements a dynamic version of multivariate pattern analysis. We believe our approach provides a powerful framework for visualizing, analyzing, and discovering dynamic spatially distributed brain representations during naturalistic conditions.

Author summary: Brain signals are inherently dynamic and evolve in both space and time as a function of cognitive or emotional task condition or mental state. To characterize brain dynamics, we employed an architecture based on recurrent neural networks, and applied it to functional magnetic resonance imaging data from humans watching movies or during continuous experimental conditions. The model learned spatiotemporal patterns that allowed it to correctly classify which clip a participant was watching based entirely on data from other participants; the model also learned a binary classification of experimental conditions at the level of voxels. We developed a dimensionality reduction approach that uncovered low-dimensional "trajectories" and captured essential information properties of brain dynamics. When applied at the level of voxels, our framework implements a dynamic version of multivariate pattern analysis. We believe our approach provides a powerful framework for visualizing, analyzing, and discovering dynamic spatially distributed brain representations during naturalistic conditions.
]]></description>
<dc:creator>Pessoa, L.</dc:creator>
<dc:creator>Limbachia, C.</dc:creator>
<dc:creator>Misra, J.</dc:creator>
<dc:creator>Surampudi, S. G.</dc:creator>
<dc:creator>Venkatesh, M.</dc:creator>
<dc:creator>Jaja, J.</dc:creator>
<dc:date>2021-03-27</dc:date>
<dc:identifier>doi:10.1101/2021.03.27.437315</dc:identifier>
<dc:title><![CDATA[Learning brain dynamics for decoding and predicting individual differences]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-27</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.26.465919v1?rss=1">
<title>
<![CDATA[
Brain-inspired spiking neural network controller for a neurorobotic whisker system 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.26.465919v1?rss=1
</link>
<description><![CDATA[
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model to study active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot, exploiting the Neurorobotics Platform, a simulation platform offering a virtual environment to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
]]></description>
<dc:creator>Antonietti, A.</dc:creator>
<dc:creator>Geminiani, A.</dc:creator>
<dc:creator>Negri, E.</dc:creator>
<dc:creator>D'Angelo, E. U.</dc:creator>
<dc:creator>Casellato, C.</dc:creator>
<dc:creator>Pedrocchi, A.</dc:creator>
<dc:date>2021-10-28</dc:date>
<dc:identifier>doi:10.1101/2021.10.26.465919</dc:identifier>
<dc:title><![CDATA[Brain-inspired spiking neural network controller for a neurorobotic whisker system]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-10-28</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.30.466617v1?rss=1">
<title>
<![CDATA[
Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.30.466617v1?rss=1
</link>
<description><![CDATA[
Medial entorhinal cortex (MEC) supports a wide range of navigational and memory related behaviors. Well-known experimental results have revealed specialized cell types in MEC -- e.g. grid, border, and head-direction cells -- whose highly stereotypical response profiles are suggestive of the role they might play in supporting MEC functionality. However, the majority of MEC neurons do not exhibit stereotypical firing patterns. How should the response profiles of these more "heterogeneous" cells be described, and how do they contribute to behavior? In this work, we took a computational approach to addressing these questions. We first performed a statistical analysis that shows that heterogeneous MEC cells are just as reliable in their response patterns as the more stereotypical cell types, suggesting that they have a coherent functional role. Next, we evaluated a spectrum of candidate models in terms of their ability to describe the response profiles of both stereotypical and heterogeneous MEC cells. We found that recently developed task-optimized neural network models are substantially better than traditional grid cell-centric models at matching most MEC neuronal response profiles -- including those of grid cells themselves -- despite not being explicitly trained for this purpose. Specific choices of network architecture (such as gated nonlinearities and an explicit intermediate place cell representation) have an important effect on the ability of the model to generalize to novel scenarios, with the best of these models closely approaching the noise ceiling of the data itself. We then performed in silico experiments on this model to address questions involving the relative functional relevance of various cell types, finding that heterogeneous cells are likely to be just as involved in downstream functional outcomes (such as path integration) as grid and border cells. 
Finally, inspired by recent data showing that, going beyond their spatial response selectivity, MEC cells are also responsive to non-spatial rewards, we introduce a new MEC model that performs reward-modulated path integration. We find that this unified model matches neural recordings across all variable-reward conditions. Taken together, our results point toward a conceptually principled goal-driven modeling approach for moving future experimental and computational efforts beyond overly-simplistic single-cell stereotypes.
]]></description>
<dc:creator>Nayebi, A.</dc:creator>
<dc:creator>Attinger, A.</dc:creator>
<dc:creator>Campbell, M. G.</dc:creator>
<dc:creator>Hardcastle, K.</dc:creator>
<dc:creator>Low, I. I. C.</dc:creator>
<dc:creator>Mallory, C. S.</dc:creator>
<dc:creator>Mel, G. C.</dc:creator>
<dc:creator>Sorscher, B.</dc:creator>
<dc:creator>Williams, A. H.</dc:creator>
<dc:creator>Ganguli, S.</dc:creator>
<dc:creator>Giocomo, L. M.</dc:creator>
<dc:creator>Yamins, D. L. K.</dc:creator>
<dc:date>2021-11-02</dc:date>
<dc:identifier>doi:10.1101/2021.10.30.466617</dc:identifier>
<dc:title><![CDATA[Explaining heterogeneity in medial entorhinal cortex with task-driven neural networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.11.07.467591v1?rss=1">
<title>
<![CDATA[
Weak Coupling Between Spontaneous Local Cortical Activity State Switches Under Anesthesia Leads to Strongly Correlated Global Cortical States 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.11.07.467591v1?rss=1
</link>
<description><![CDATA[
Under anesthesia, neural dynamics deviate dramatically from those seen during wakefulness. During recovery from this perturbation, thalamocortical activity abruptly switches among a small set of metastable intermediate states. These metastable states and structured transitions among them form a scaffold that guides the brain back to the waking state. Here, we investigate the mechanisms that constrain cortical activity to discrete states and give rise to abrupt transitions among them. If state transitions were imposed onto the thalamocortical system by changes in the subcortical modulation, different cortical sites should exhibit near-synchronous state transitions. To test this hypothesis, we quantified state synchrony at different cortical sites in anesthetized rats. States were defined by compressing spectra of layer-specific local field potentials (LFPs) in visual and motor cortices. Transition synchrony, mutual information, and canonical correlations all demonstrate that most state transitions in the cortex are local and that coupling between sites is weak. Fluctuations in the LFP in the thalamic input layer 4 were particularly dissimilar from those in supra- and infra-granular layers. Thus, our results suggest that the discrete global cortical states are not imposed by the ascending modulatory pathways but emerge from the multitude of weak pairwise interactions within the cortex.
]]></description>
<dc:creator>Blackwood, E. B.</dc:creator>
<dc:creator>Shortal, B. P.</dc:creator>
<dc:creator>Proekt, A.</dc:creator>
<dc:date>2021-11-08</dc:date>
<dc:identifier>doi:10.1101/2021.11.07.467591</dc:identifier>
<dc:title><![CDATA[Weak Coupling Between Spontaneous Local Cortical Activity State Switches Under Anesthesia Leads to Strongly Correlated Global Cortical States]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-08</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.31.466638v1?rss=1">
<title>
<![CDATA[
Integrating Task-Based Functional MRI Across Tasks Markedly Boosts Prediction and Reliability of Brain-Cognition Relationship 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.31.466638v1?rss=1
</link>
<description><![CDATA[
Capturing individual differences in cognition is central to human neuroscience. Yet our ability to estimate cognitive abilities via brain MRI is still poor in both prediction and reliability. Our study tested whether this can be improved by integrating MRI signals across the whole brain and across modalities, including task-based functional MRI (tfMRI) of different tasks along with other non-task MRI modalities, such as structural MRI and resting-state functional connectivity. Using the Human Connectome Project (n=873, 473 females, after quality control), we directly compared predictive models comprising different sets of MRI modalities (e.g., seven tasks vs. non-task modalities). We applied two approaches to integrate multimodal MRI, stacked vs. flat models, and implemented 16 combinations of machine-learning algorithms. The stacked model integrating all modalities via stacking Elastic Net provided the best prediction (r=.57), relative to the other models tested, as well as excellent test-retest reliability (ICC=~.85) in capturing general cognitive abilities. Importantly, compared to the stacked model integrating across non-task modalities (r=.27), the stacked model integrating tfMRI across tasks led to significantly higher prediction (r=.56) while still providing excellent test-retest reliability (ICC=~.83). The stacked model integrating tfMRI across tasks was driven by frontal and parietal areas and by cognition-related tasks (working-memory, relational processing, and language). This result is consistent with the parieto-frontal integration theory of intelligence. Accordingly, our results contradict the recently popular notion that tfMRI is not reliable enough to capture individual differences in cognition. 
Instead, our study suggests that tfMRI, when used appropriately (i.e., by drawing information across the whole brain and across tasks and by integrating with other modalities), provides predictive and reliable sources of information for individual differences in cognitive abilities, more so than non-task modalities.

Highlights
- Non-task MRI (sMRI, rs-fMRI) are often used for the brain-cognition relationship.
- Task-based fMRI has been deemed unreliable for capturing individual differences.
- We tested if drawing task-based fMRI information across regions/tasks improves prediction and reliability of the brain-cognition relationship.
- Our approach boosts prediction of task-based fMRI over non-task MRI.
- Our approach renders task-based fMRI reliable over time.
- Our approach shows the importance of the fronto-parietal areas in cognition.
]]></description>
<dc:creator>Tetereva, A.</dc:creator>
<dc:creator>Li, J.</dc:creator>
<dc:creator>Deng, J.</dc:creator>
<dc:creator>Stringaris, A.</dc:creator>
<dc:creator>Pat, N.</dc:creator>
<dc:date>2021-11-02</dc:date>
<dc:identifier>doi:10.1101/2021.10.31.466638</dc:identifier>
<dc:title><![CDATA[Integrating Task-Based Functional MRI Across Tasks Markedly Boosts Prediction and Reliability of Brain-Cognition Relationship]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.10.31.466667v1?rss=1">
<title>
<![CDATA[
Sequence anticipation and STDP emerge from a voltage-based predictive learning rule 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.10.31.466667v1?rss=1
</link>
<description><![CDATA[
Intelligent behavior depends on the brain's ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, which provides a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory signalling and recall in a recurrent network. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
]]></description>
<dc:creator>Saponati, M.</dc:creator>
<dc:creator>Vinck, M.</dc:creator>
<dc:date>2021-11-03</dc:date>
<dc:identifier>doi:10.1101/2021.10.31.466667</dc:identifier>
<dc:title><![CDATA[Sequence anticipation and STDP emerge from a voltage-based predictive learning rule]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-03</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.11.30.470555v1?rss=1">
<title>
<![CDATA[
Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.11.30.470555v1?rss=1
</link>
<description><![CDATA[
Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate that inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail, such as a distinction between neuron types (excitatory and inhibitory), and rely on unrealistic connection schemas and non-sparse assemblies. In this study, we present a rate-based cortical model where neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and where memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported that inhibitory neurons and their sub-types uniquely respond to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory sub-populations - one connected to excitatory assemblies locally and the other connected globally - can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance: in the context of choice or preference of relationships, it permits and maintains a broader range of memory items when local inhibition is dominant and, conversely, consolidates and strengthens a smaller range of memory items when global inhibition is dominant. This model therefore highlights a biologically-plausible and behaviourally-useful function of inhibitory diversity in memory.
]]></description>
<dc:creator>Burns, T. F.</dc:creator>
<dc:creator>Haga, T. F.</dc:creator>
<dc:creator>Fukai, T.</dc:creator>
<dc:date>2021-12-01</dc:date>
<dc:identifier>doi:10.1101/2021.11.30.470555</dc:identifier>
<dc:title><![CDATA[Multiscale and extended retrieval of associative memory structures in a cortical model of local-global inhibition balance]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-12-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.01.13.426570v1?rss=1">
<title>
<![CDATA[
A large-scale neural network training framework for generalized estimation of single-trial population dynamics 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.01.13.426570v1?rss=1
</link>
<description><![CDATA[
Recent technical advances have enabled recording of increasingly large populations of neural activity, even during natural, unstructured behavior. Deep sequential autoencoders are the current state-of-the-art for uncovering dynamics from these datasets. However, these highly complex models include many non-trainable hyperparameters (HPs) that are typically hand tuned with reference to supervisory information (e.g., behavioral data). This process is cumbersome and time consuming and biases model selection toward models with good representations of individual supervisory variables. Additionally, it cannot be applied to cognitive areas or unstructured tasks for which supervisory information is unavailable. Here we demonstrate AutoLFADS, an automated model-tuning framework that can characterize dynamics using only neural data, without the need for supervisory information. This enables inference of dynamics out-of-the-box in diverse brain areas and behaviors, which we demonstrate on several datasets: motor cortex during free-paced reaching, somatosensory cortex during reaching with perturbations, and dorsomedial frontal cortex during cognitive timing tasks. We also provide a cloud software package and comprehensive tutorials that enable new users to apply the method without dedicated computing resources.
]]></description>
<dc:creator>Keshtkaran, M. R.</dc:creator>
<dc:creator>Sedler, A. R.</dc:creator>
<dc:creator>Chowdhury, R. H.</dc:creator>
<dc:creator>Tandon, R.</dc:creator>
<dc:creator>Basrai, D.</dc:creator>
<dc:creator>Nguyen, S. L.</dc:creator>
<dc:creator>Sohn, H.</dc:creator>
<dc:creator>Jazayeri, M.</dc:creator>
<dc:creator>Miller, L. E.</dc:creator>
<dc:creator>Pandarinath, C.</dc:creator>
<dc:date>2021-01-15</dc:date>
<dc:identifier>doi:10.1101/2021.01.13.426570</dc:identifier>
<dc:title><![CDATA[A large-scale neural network training framework for generalized estimation of single-trial population dynamics]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-01-15</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.11.21.469441v1?rss=1">
<title>
<![CDATA[
A deep learning framework for inference of single-trial neural population activity from calcium imaging with sub-frame temporal resolution 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.11.21.469441v1?rss=1
</link>
<description><![CDATA[
In many brain areas, neural populations act as a coordinated network whose state is tied to behavior on a moment-by-moment basis and millisecond timescale. Two-photon (2p) calcium imaging is a powerful tool to probe network-scale computation, as it can measure the activity of many individual neurons, monitor multiple cortical layers simultaneously, and sample from identified cell types. However, estimating network state and dynamics from 2p measurements has proven challenging because of noise, inherent nonlinearities, and limitations on temporal resolution. Here we describe RADICaL, a deep learning method to overcome these limitations at the population level. RADICaL extends methods that exploit dynamics in spiking activity for application to deconvolved calcium signals, whose statistics and temporal dynamics are quite distinct from electrophysiologically-recorded spikes. It incorporates a novel network training strategy that capitalizes on the timing of 2p sampling to recover network dynamics with high temporal precision. In synthetic tests, RADICaL infers network state more accurately than previous methods, particularly for high-frequency components. In real 2p recordings from sensorimotor areas in mice performing a "water grab" task, RADICaL infers network state with close correspondence to single-trial variations in behavior, and maintains high-quality inference even when neuronal populations are substantially reduced.
]]></description>
<dc:creator>Zhu, F.</dc:creator>
<dc:creator>Grier, H. A.</dc:creator>
<dc:creator>Tandon, R.</dc:creator>
<dc:creator>Cai, C.</dc:creator>
<dc:creator>Giovannucci, A.</dc:creator>
<dc:creator>Kaufman, M. T.</dc:creator>
<dc:creator>Pandarinath, C.</dc:creator>
<dc:date>2021-11-21</dc:date>
<dc:identifier>doi:10.1101/2021.11.21.469441</dc:identifier>
<dc:title><![CDATA[A deep learning framework for inference of single-trial neural population activity from calcium imaging with sub-frame temporal resolution]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-21</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.09.30.321752v1?rss=1">
<title>
<![CDATA[
PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.09.30.321752v1?rss=1
</link>
<description><![CDATA[
Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep learning methods, to perform cognitive tasks used in animal and human experiments, and can be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by technical barriers in use of deep learning software packages to train network models. Here we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, for researchers to define tasks and train RNN models using only Python and NumPy without requiring knowledge of deep learning software. The training backend is based on TensorFlow and is readily extensible for researchers with TensorFlow knowledge to develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed-loop based on performance. Shaping is ubiquitous in training of animals in cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates application of trained RNNs in neuroscience research.

Visual Abstract

Figure 1. Example workflow for using PsychRNN. First, the task of interest is defined, and a recurrent neural network model is trained to perform the task, optionally with neurobiologically informed constraints on the network. After the network is trained, the researchers can investigate network properties including the synaptic connectivity patterns and the dynamics of neural population activity during task execution, and other studies, e.g. those on perturbations, can be explored. The dotted line shows the possible repetition of this cycle with one network, which allows investigation of training effects of task shaping, or curriculum learning, for closed-loop training of the network on a progression of tasks.

Significance Statement: Artificial recurrent neural network (RNN) modeling is of increasing interest within computational, systems, and cognitive neuroscience, yet its proliferation as a computational tool within the field has been limited due to technical barriers in use of specialized deep-learning software. PsychRNN provides an accessible, flexible, and powerful framework for training RNN models on cognitive tasks. Users can define tasks and train models using the Python-based interface, which enables RNN modeling studies without requiring user knowledge of deep learning software. PsychRNN's modular structure facilitates task specification and incorporation of neurobiological constraints, and supports extensibility for users with deep learning expertise. PsychRNN's framework for RNN modeling will increase accessibility and reproducibility of this approach across neuroscience subfields.
]]></description>
<dc:creator>Ehrlich, D. B.</dc:creator>
<dc:creator>Stone, J. T.</dc:creator>
<dc:creator>Brandfonbrener, D.</dc:creator>
<dc:creator>Atanasov, A.</dc:creator>
<dc:creator>Murray, J. D.</dc:creator>
<dc:date>2020-10-01</dc:date>
<dc:identifier>doi:10.1101/2020.09.30.321752</dc:identifier>
<dc:title><![CDATA[PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-10-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.05.19.492685v1?rss=1">
<title>
<![CDATA[
Parieto-frontal Oscillations Show Hand Specific Interactions with Top-Down Movement Plans 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.05.19.492685v1?rss=1
</link>
<description><![CDATA[
To generate a hand-specific reach plan, the brain must integrate hand-specific signals with the desired movement strategy. Although various neurophysiology / imaging studies have investigated hand-target interactions in simple reach-to-target tasks, the whole-brain timing and distribution of this process remain unclear, especially for more complex, instruction-dependent motor strategies. Previously, we showed that a pro/anti-pointing instruction influences magnetoencephalographic (MEG) signals in frontal cortex that then propagate recurrently through parietal cortex (Blohm et al., 2019). Here, we contrasted left versus right hand pointing in the same task to investigate 1) which cortical regions of interest show hand specificity, and 2) which of those areas interact with the instructed motor plan. Eight bilateral areas - the parietooccipital junction (POJ), superior parietooccipital cortex (SPOC), supramarginal gyrus (SMG), middle / anterior interparietal sulcus (mIPS/aIPS), primary somatosensory / motor cortex (S1/M1), and dorsal premotor cortex (PMd) - showed hand-specific changes in beta band power, with four of these (M1, S1, SMG, aIPS) showing robust activation before movement onset. M1, SMG, SPOC, and aIPS showed significant interactions between contralateral hand specificity and the instructed motor plan, but not with bottom-up target signals. Separate hand / motor signals emerged relatively early and lasted through execution, whereas hand-motor interactions only occurred close to movement onset. Taken together with our previous results, these findings show that instruction-dependent motor plans emerge in frontal cortex and interact recurrently with hand-specific parietofrontal signals before movement onset to produce hand-specific motor behaviors.

Impact Statement: The brain must generate different motor signals depending on which hand is used. The distribution and timing of hand use / instructed motor plan integration is not understood at the whole-brain level. Using whole-brain MEG recordings, we show that different sub-networks involved in action planning code for hand usage (alpha and beta frequencies) and integrate hand use information into a hand-specific motor plan (beta band). The timing of these signals indicates that frontal cortex first creates a general motor plan and then integrates hand-specific frontoparietal information to produce a hand-specific motor plan.
]]></description>
<dc:creator>Blohm, G.</dc:creator>
<dc:creator>Cheyne, D. O.</dc:creator>
<dc:creator>Crawford, J. D.</dc:creator>
<dc:date>2022-05-20</dc:date>
<dc:identifier>doi:10.1101/2022.05.19.492685</dc:identifier>
<dc:title><![CDATA[Parieto-frontal Oscillations Show Hand Specific Interactions with Top-Down Movement Plans]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-05-20</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.08.31.272450v1?rss=1">
<title>
<![CDATA[
A deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging data 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.08.31.272450v1?rss=1
</link>
<description><![CDATA[
Calcium imaging is a key method to record patterns of neuronal activity across populations of identified neurons. Inference of temporal patterns of action potentials ("spikes") from calcium signals is, however, challenging and often limited by the scarcity of ground truth data containing simultaneous measurements of action potentials and calcium signals. To overcome this problem, we compiled a large and diverse ground truth database from publicly available and newly performed recordings. This database covers various types of calcium indicators, cell types, and signal-to-noise ratios and comprises a total of >35 hours from 298 neurons. We then developed a novel algorithm for spike inference (CASCADE) that is based on supervised deep networks, takes advantage of the ground truth database, infers absolute spike rates, and outperforms existing model-based algorithms. To optimize performance for unseen imaging data, CASCADE retrains itself by resampling ground truth data to match the respective sampling rate and noise level. As a consequence, no parameters need to be adjusted by the user. To facilitate routine application of CASCADE, we developed systematic performance assessments for unseen data, we openly release all resources, and we provide a user-friendly cloud-based implementation.
]]></description>
<dc:creator>Rupprecht, P.</dc:creator>
<dc:creator>Carta, S.</dc:creator>
<dc:creator>Hoffmann, A.</dc:creator>
<dc:creator>Echizen, M.</dc:creator>
<dc:creator>Kitamura, K.</dc:creator>
<dc:creator>Helmchen, F.</dc:creator>
<dc:creator>Friedrich, R. W.</dc:creator>
<dc:date>2020-09-01</dc:date>
<dc:identifier>doi:10.1101/2020.08.31.272450</dc:identifier>
<dc:title><![CDATA[A deep learning toolbox for noise-optimized, generalized spike inference from calcium imaging data]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-09-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.05.11.087825v1?rss=1">
<title>
<![CDATA[
Visual stimulus-specific habituation of innate defensive behaviour in mice 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.05.11.087825v1?rss=1
</link>
<description><![CDATA[
Innate defensive responses such as freezing or escape are essential for animal survival. Mice show defensive behaviour to stimuli sweeping overhead, like a bird cruising the sky. Here, we found that mice reduced their defensive freezing after sessions with a stimulus passing overhead repeatedly. This habituation is stimulus-specific, as mice freeze again to a novel shape. This stimulus specificity allowed us to investigate invariances in the mouse visual system. The mice generalize over retinotopic location and over size and shape, but distinguish objects when they differ in both size and shape. Innate visual defensive responses are thus strongly influenced by previous experience, as mice learn to ignore specific stimuli. This form of learning occurs at the level of a location-independent representation.
]]></description>
<dc:creator>Tafreshiha, A.</dc:creator>
<dc:creator>Van den Burg, S. A.</dc:creator>
<dc:creator>Smits, K.</dc:creator>
<dc:creator>Blömer, L. A.</dc:creator>
<dc:creator>Heimel, J. A.</dc:creator>
<dc:date>2020-05-13</dc:date>
<dc:identifier>doi:10.1101/2020.05.11.087825</dc:identifier>
<dc:title><![CDATA[Visual stimulus-specific habituation of innate defensive behaviour in mice]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-05-13</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.02.433514v1?rss=1">
<title>
<![CDATA[
Transient beta activity and connectivity during sustained motor behaviour 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.02.433514v1?rss=1
</link>
<description><![CDATA[
Neural oscillations are thought to play a central role in orchestrating activity states between distant neural populations. In humans, long-range neural connectivity has been particularly well characterised for 13-30 Hz beta activity, which becomes phase coupled between the motor cortex and the contralateral muscle during isometric contraction. Based on this and related observations, beta activity and connectivity have been linked to sustaining stable cognitive and motor states - or the "status quo" - in the brain. Recently, however, beta activity has been shown to be short-lived, as opposed to sustained - though so far this has been reported for regional beta activity in tasks without sustained motor demands. Here, we measured magnetoencephalography (MEG) and electromyography (EMG) in 18 human participants performing an isometric-contraction (gripping) task designed to yield sustained behavioural output. If cortico-muscular beta connectivity is directly responsible for sustaining a stable motor state, then beta activity should be (or become) sustained in this context. In contrast, we found that beta activity and connectivity with the downstream muscle were transient, even when participants engaged in sustained gripping. Moreover, we found that sustained motor requirements did not prolong beta-event duration in comparison to rest. These findings suggest that long-range neural synchronisation may entail short "bursts" of frequency-specific connectivity, even when task demands - and behaviour - are sustained.

Highlights
- Trial-average 13-30 Hz beta activity and connectivity with the muscle appear sustained during stable motor behaviour
- Single-trial beta activity and connectivity are short-lived, even when motor behaviour is sustained
- Sustained task demands do not prolong beta-event duration in comparison to resting state
]]></description>
<dc:creator>Echeverria-Altuna, I.</dc:creator>
<dc:creator>Quinn, A. J.</dc:creator>
<dc:creator>Zokaei, N.</dc:creator>
<dc:creator>Woolrich, M. W.</dc:creator>
<dc:creator>Nobre, A. C.</dc:creator>
<dc:creator>van Ede, F.</dc:creator>
<dc:date>2021-03-02</dc:date>
<dc:identifier>doi:10.1101/2021.03.02.433514</dc:identifier>
<dc:title><![CDATA[Transient beta activity and connectivity during sustained motor behaviour]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.12.30.424873v1?rss=1">
<title>
<![CDATA[
Animal-to-Animal Variability in Hippocampal Remapping 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.12.30.424873v1?rss=1
</link>
<description><![CDATA[
Hippocampal place cells form a map of an animal's environment. When the animal moves to a new environment, place field locations and firing rates change, a phenomenon known as remapping. Different animals can have different remapping responses to the same environments. This variability across animals in remapping behavior is not well understood. In this work, we analyzed electrophysiological recordings from Alme et al. (2014), in which five male rats were exposed to 11 different environments. To compare the hippocampal maps in two rooms, we computed average rate map correlation coefficients. We discovered that the heterogeneity in animals' remapping behavior is structured: an animal's remapping behavior is consistent across a range of independent comparisons. Our findings highlight that remapping behavior between repeated environments depends on animal-specific factors.
]]></description>
<dc:creator>Nilchian, P.</dc:creator>
<dc:creator>Wilson, M. A.</dc:creator>
<dc:creator>Sanders, H.</dc:creator>
<dc:date>2021-01-02</dc:date>
<dc:identifier>doi:10.1101/2020.12.30.424873</dc:identifier>
<dc:title><![CDATA[Animal-to-Animal Variability in Hippocampal Remapping]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-01-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.12.07.415125v1?rss=1">
<title>
<![CDATA[
Can we conjointly record direct interactions between neurons in vivo in anatomically-connected brain areas? Probabilistic analyses and further implications. 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.12.07.415125v1?rss=1
</link>
<description><![CDATA[
Large-scale simultaneous in vivo recordings of neurons in multiple brain regions raise the question of the probability of recording direct interactions of neurons within, and between, multiple brain regions. In turn, identifying inter-regional communication rules between neurons during behavioural tasks might be possible, assuming conjoint activity between neurons in connected brain regions can be detected. Using the hypergeometric distribution, and employing anatomically-tractable connection mapping between regions, we derive a method to calculate the probability distribution of recordable connections between groups of neurons. This mathematically-derived distribution is validated by Monte Carlo simulations of directed graphs representing the underlying anatomical connectivity structure. We apply this method to simulated graphs with multiple neurons, based on counts in rat brain regions, and to connection matrices from the Blue Brain model of the mouse neocortex connectome. Overall, we find low probabilities of simultaneously recording directly interacting neurons in vivo in anatomically-connected regions with standard (tetrode-based) approaches. We suggest that alternative approaches, including new recording technologies and summing neuronal activity over larger scales, offer promise for testing hypothesised interregional communication and source transformation rules.
]]></description>
<dc:creator>Martin, S. K.</dc:creator>
<dc:creator>Aggleton, J.</dc:creator>
<dc:creator>O'Mara, S.</dc:creator>
<dc:date>2020-12-08</dc:date>
<dc:identifier>doi:10.1101/2020.12.07.415125</dc:identifier>
<dc:title><![CDATA[Can we conjointly record direct interactions between neurons in vivo in anatomically-connected brain areas? Probabilistic analyses and further implications.]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-12-08</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.10.07.329698v1?rss=1">
<title>
<![CDATA[
Neural dynamics of semantic categorization in semantic variant of Primary Progressive Aphasia. 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.10.07.329698v1?rss=1
</link>
<description><![CDATA[
Semantic representations are processed along a posterior-to-anterior gradient reflecting a shift from perceptual (e.g., it has eight legs) to conceptual (e.g., venomous spiders are rare) information. One critical region is the anterior temporal lobe (ATL): patients with semantic variant primary progressive aphasia (svPPA), a clinical syndrome associated with ATL neurodegeneration, manifest a deep loss of semantic knowledge. We test the hypothesis that svPPA patients perform semantic tasks by over-recruiting areas implicated in perceptual processing. We compared MEG recordings of svPPA patients and healthy controls during a categorization task. While behavioral performance did not differ, svPPA patients showed indications of greater activation over bilateral occipital cortices and superior temporal gyrus, and inconsistent engagement of frontal regions. These findings suggest a pervasive reorganization of brain networks in response to ATL neurodegeneration: the loss of this critical hub leads to a dysregulated (semantic) control system, and defective semantic representations are seemingly compensated via enhanced perceptual processing.

Impact Statement: Following anterior temporal lobe neurodegeneration, defective semantic representations are compensated via enhanced perceptual processing and associated with a dysregulation of the semantic control system.
]]></description>
<dc:creator>Borghesani, V.</dc:creator>
<dc:creator>Dale, C. L.</dc:creator>
<dc:creator>Lukic, S.</dc:creator>
<dc:creator>Hinkley, L. B. N.</dc:creator>
<dc:creator>Lauricella, M.</dc:creator>
<dc:creator>Shwe, W.</dc:creator>
<dc:creator>Miziuri, D.</dc:creator>
<dc:creator>Honma, S.</dc:creator>
<dc:creator>Miller, Z.</dc:creator>
<dc:creator>Miller, B. L.</dc:creator>
<dc:creator>Houde, J.</dc:creator>
<dc:creator>Gorno-Tempini, M. L.</dc:creator>
<dc:creator>Nagarajan, S.</dc:creator>
<dc:date>2020-10-09</dc:date>
<dc:identifier>doi:10.1101/2020.10.07.329698</dc:identifier>
<dc:title><![CDATA[Neural dynamics of semantic categorization in semantic variant of Primary Progressive Aphasia.]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-10-09</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/847798v1?rss=1">
<title>
<![CDATA[
Taking the sub-lexical route: brain dynamics of reading in the semantic variant of Primary Progressive Aphasia. 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/847798v1?rss=1
</link>
<description><![CDATA[
Reading aloud requires mapping an orthographic form to a phonological one. The mapping process relies on sub-lexical statistical regularities (e.g., "oo" to /uː/) or on learned lexical associations between a specific visual form and a series of sounds (e.g., yacht to /jɒt/). Computational, neuroimaging, and neuropsychological evidence suggests that sub-lexical/phonological and lexico-semantic processes rely on partially distinct neural substrates: a dorsal (occipito-parietal) and a ventral (occipito-temporal) route, respectively.

Here, we investigated the spatiotemporal features of orthography-to-phonology mapping, capitalizing on the time resolution of magnetoencephalography and the unique clinical model offered by patients with semantic variant of Primary Progressive Aphasia (svPPA). Behaviorally, svPPA patients manifest marked lexico-semantic impairments including difficulties in reading words with exceptional orthographic to phonological correspondence (irregular words). Moreover, they present with focal neurodegeneration in the anterior temporal lobe (ATL), affecting primarily the ventral, occipito-temporal, lexical route. Therefore, this clinical population allows for testing of specific hypotheses on the neural implementation of the dual-route model for reading, such as whether damage to one route can be compensated by over-reliance on the other. To this end, we reconstructed and analyzed time-resolved whole-brain activity in 12 svPPA patients and 12 healthy age-matched controls while reading irregular words (e.g., yacht) and pseudowords (e.g., pook).

Consistent with previous findings that the dorsal route is involved in sub-lexical, phonological processes, in control participants we observed enhanced neural activity over dorsal occipito-parietal cortices for pseudowords, when compared to irregular words. This activation was manifested in the beta-band (12-30 Hz), ramping up slowly over 500 ms after stimulus onset and peaking at ~800 ms, around response selection and production. Consistent with our prediction, svPPA patients did not exhibit the temporal pattern of neural activity observed in controls for this contrast. Furthermore, a direct comparison of neural activity between patients and controls revealed a dorsal spatiotemporal cluster during irregular word reading. These findings suggest that the sub-lexical/phonological route is involved in processing both irregular and pseudowords in svPPA.

Together these results provide further evidence supporting a dual-route model for reading aloud mediated by the interplay between lexico-semantic and sub-lexical/phonological neuro-cognitive systems. When the ventral route is damaged, as in the case of neurodegeneration affecting the ATL, partial compensation appears to be possible by over-recruitment of the slower, serial attention-dependent, dorsal one.

Abbreviated Summary: Borghesani et al. investigate brain dynamics during irregular word reading using magnetoencephalographic imaging in patients with semantic variant of primary progressive aphasia. Due to ventral anterior temporal lobe neurodegeneration, patients show greater reliance on dorsal, occipito-parietal brain regions, providing novel evidence for the interplay between ventral and dorsal routes for reading.
]]></description>
<dc:creator>Borghesani, V.</dc:creator>
<dc:creator>Hinkley, L. B.</dc:creator>
<dc:creator>Ranasinghe, K. G.</dc:creator>
<dc:creator>Thompson, M.</dc:creator>
<dc:creator>Shwe, W.</dc:creator>
<dc:creator>Mizuiri, D.</dc:creator>
<dc:creator>Lauricella, M.</dc:creator>
<dc:creator>Europa, E.</dc:creator>
<dc:creator>Honma, S.</dc:creator>
<dc:creator>Miller, Z.</dc:creator>
<dc:creator>Miller, B. L.</dc:creator>
<dc:creator>Vossel, K.</dc:creator>
<dc:creator>Houde, J. F.</dc:creator>
<dc:creator>Gorno-Tempini, M. L.</dc:creator>
<dc:creator>Nagarajan, S.</dc:creator>
<dc:date>2019-11-21</dc:date>
<dc:identifier>doi:10.1101/847798</dc:identifier>
<dc:title><![CDATA[Taking the sub-lexical route: brain dynamics of reading in the semantic variant of Primary Progressive Aphasia.]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2019-11-21</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/797548v1?rss=1">
<title>
<![CDATA[
Remembrance of things practiced: A two-pathway circuit for sequential learning 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/797548v1?rss=1
</link>
<description><![CDATA[
The learning of motor skills unfolds over multiple timescales, with rapid initial gains in performance followed by a longer period in which the behavior becomes more refined, habitual, and automatized. While recent lesion and inactivation experiments have provided hints about how various brain areas might contribute to such learning, their precise roles and the neural mechanisms underlying them are not well understood. In this work, we propose neural- and circuit-level mechanisms by which motor cortex, thalamus, and striatum support such learning. In this model, the combination of fast cortical learning and slow subcortical learning gives rise to a covert learning process through which control of behavior is gradually transferred from cortical to subcortical circuits, while protecting learned behaviors that are practiced repeatedly against overwriting by future learning. Together, these results point to a new computational role for thalamus in motor learning, and, more broadly, provide a framework for understanding the neural basis of habit formation and the automatization of behavior through practice.
]]></description>
<dc:creator>Murray, J. M.</dc:creator>
<dc:creator>Escola, S.</dc:creator>
<dc:date>2019-10-08</dc:date>
<dc:identifier>doi:10.1101/797548</dc:identifier>
<dc:title><![CDATA[Remembrance of things practiced: A two-pathway circuit for sequential learning]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2019-10-08</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.11.07.373019v1?rss=1">
<title>
<![CDATA[
NBR: Network-based R-statistics for (unbalanced) longitudinal samples 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.11.07.373019v1?rss=1
</link>
<description><![CDATA[
Network neuroscience models the brain as interacting elements. However, a large number of elements implies a vast number of interactions, making it difficult to assess which connections are relevant and which are spurious. Zalesky et al. (2010) proposed the Network-Based Statistics (NBS), which identifies clusters of connections and tests their likelihood via permutation tests. This framework shows a better trade-off of Type I and II errors compared to conventional multiple comparison corrections. NBS uses General Linear Hypothesis Testing (GLHT), which may underestimate the within-subject variance structure when dealing with longitudinal samples with a varying number of observations (unbalanced samples). We implemented NBR, an R package that extends the NBS framework by adding (non)linear mixed-effects (LME) models. LME models capture the within-subject variance in more detail and deal with missing values more flexibly. To illustrate its advantages, we used a public dataset of 333 human participants (188/145 females/males; age range: 17.0-28.4 y.o.) with two (n=212) or three (n=121) sessions each. Sessions include a resting-state fMRI scan and psychometric data. State anxiety scores and connectivity matrices between brain lobes were extracted. We tested their relationship using GLHT and LME models for balanced and unbalanced datasets, respectively. Only the LME approach found a significant association between state anxiety and a subnetwork that includes the cingulum, frontal, parietal, occipital, and cerebellar regions. Given that missing data are very common in longitudinal studies, we expect that NBR will be very useful to explore unbalanced samples.

Significance Statement: Longitudinal studies are increasingly common in neuroscience, providing new insights into the brain under treatment, development, or aging. Nevertheless, missing data are highly frequent in those studies, and conventional designs may discard incomplete observations or underestimate the within-subject variance. We developed publicly available software (R package: NBR) that fits mixed-effects models to every possible connection in a sample of networks and can find significant subsets of connections using non-parametric permutation tests. We demonstrate that using NBR on larger unbalanced samples gives higher statistical power than exploring only the balanced subsamples. Although this method is applicable to network analysis in general, we anticipate it being particularly useful in systems neuroscience considering the increase of longitudinal samples in the field.
]]></description>
<dc:creator>Gracia-Tabuenca, Z.</dc:creator>
<dc:creator>Alcauter, S.</dc:creator>
<dc:date>2020-11-08</dc:date>
<dc:identifier>doi:10.1101/2020.11.07.373019</dc:identifier>
<dc:title><![CDATA[NBR: Network-based R-statistics for (unbalanced) longitudinal samples]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-11-08</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/808154v1?rss=1">
<title>
<![CDATA[
Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification (PSID) 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/808154v1?rss=1
</link>
<description><![CDATA[
Neural activity exhibits dynamics that, in addition to a behavior of interest, also relate to other brain functions or internal states. Understanding how neural dynamics explain behavior requires dissociating behaviorally relevant and irrelevant dynamics, which is not achieved with current neural dynamic models as they are learned without considering behavior. We develop a novel preferential subspace identification (PSID) algorithm that models neural activity while dissociating and prioritizing its behaviorally relevant dynamics. Applying PSID to large-scale neural activity in two monkeys performing naturalistic 3D reach-and-grasp movements uncovered new features of neural dynamics. First, PSID revealed the behaviorally relevant dynamics to be markedly lower-dimensional than otherwise implied. Second, PSID discovered distinct rotational dynamics that were more predictive of behavior. Finally, PSID more accurately learned the behaviorally relevant dynamics for each joint and recording channel. PSID provides a general new tool to reveal behaviorally relevant neural dynamics that can otherwise go unnoticed.
]]></description>
<dc:creator>Sani, O. G.</dc:creator>
<dc:creator>Pesaran, B.</dc:creator>
<dc:creator>Shanechi, M. M.</dc:creator>
<dc:date>2019-10-17</dc:date>
<dc:identifier>doi:10.1101/808154</dc:identifier>
<dc:title><![CDATA[Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification (PSID)]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2019-10-17</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.05.19.492713v1?rss=1">
<title>
<![CDATA[
Bisected graph matching improves automated pairing of bilaterally homologous neurons from connectomes 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.05.19.492713v1?rss=1
</link>
<description><![CDATA[
Graph matching algorithms attempt to find the best correspondence between the nodes of two networks. These techniques have been used to match individual neurons in nanoscale connectomes - in particular, to find pairings of neurons across hemispheres. However, since graph matching techniques deal with two isolated networks, they have only utilized the ipsilateral (same hemisphere) subgraphs when performing the matching. Here, we present a modification to a state-of-the-art graph matching algorithm which allows it to solve what we call the bisected graph matching problem. This modification allows us to leverage the connections between the brain hemispheres when predicting neuron pairs. Via simulations and experiments on real connectome datasets, we show that this approach improves matching accuracy when sufficient edge correlation is present between the contralateral (between hemisphere) subgraphs. We also show how matching accuracy can be further improved by combining our approach with previously proposed extensions to graph matching, which utilize edge types and previously known neuron pairings. We expect that our proposed method will improve future endeavors to accurately match neurons across hemispheres in connectomes, and be useful in other applications where the bisected graph matching problem arises.
]]></description>
<dc:creator>Pedigo, B. D.</dc:creator>
<dc:creator>Winding, M.</dc:creator>
<dc:creator>Priebe, C. E.</dc:creator>
<dc:creator>Vogelstein, J. T.</dc:creator>
<dc:date>2022-05-20</dc:date>
<dc:identifier>doi:10.1101/2022.05.19.492713</dc:identifier>
<dc:title><![CDATA[Bisected graph matching improves automated pairing of bilaterally homologous neurons from connectomes]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-05-20</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.07.07.499214v1?rss=1">
<title>
<![CDATA[
Predicting the principal components of cortical morphological variables 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.07.07.499214v1?rss=1
</link>
<description><![CDATA[
The generating mechanism for the gyrification of the mammalian cerebral cortex remains a central open question in neuroscience. Although many models have been proposed over the years, very few were able to provide empirically testable predictions. In this paper, we assume a model in which the cortex folds, for all species of mammals, according to a simple mechanism of effective free energy minimization of a growing self-avoiding surface subjected to inhomogeneous bulk stresses, to derive a new set of summary morphological variables that capture the most salient aspects of cortical shape and size. In terms of these new variables, we seek to understand the variance present in two morphometric datasets: a harmonized multi-site human MRI dataset comprising 3324 healthy controls (CTL) from 4 to 96 years old, and a collection of different mammalian cortices with morphological measurements extracted manually. This is done using a standard Principal Component Analysis (PCA) of the cortical morphometric space. We prove there is a remarkable coincidence (typically less than 8°) between the resulting principal component vectors in each dataset and the directions corresponding to the new variables. This shows that the new, theoretically derived variables are a set of natural and independent morphometrics with which to express cortical shape and size.
]]></description>
<dc:creator>Mello, V. B. B.</dc:creator>
<dc:creator>de Moraes, F. H.</dc:creator>
<dc:creator>Mota, B.</dc:creator>
<dc:date>2022-07-08</dc:date>
<dc:identifier>doi:10.1101/2022.07.07.499214</dc:identifier>
<dc:title><![CDATA[Predicting the principal components of cortical morphological variables]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-07-08</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/031658v1?rss=1">
<title>
<![CDATA[
DataJoint: managing big scientific data using MATLAB or Python 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/031658v1?rss=1
</link>
<description><![CDATA[
The rise of big data in modern research poses serious challenges for data management: Large and intricate datasets from diverse instrumentation must be precisely aligned, annotated, and processed in a variety of ways to extract new insights. While high levels of data integrity are expected, research teams have diverse backgrounds, are geographically dispersed, and rarely possess a primary interest in data science. Here we describe DataJoint, an open-source toolbox designed for manipulating and processing scientific data under the relational data model. Designed for scientists who need a flexible and expressive database language with few basic concepts and operations, DataJoint facilitates multiuser access, efficient queries, and distributed computing. With implementations in both MATLAB and Python, DataJoint is not limited to particular file formats, acquisition systems, or data modalities and can be quickly adapted to new experimental designs. DataJoint and related resources are available at http://datajoint.github.com.
]]></description>
<dc:creator>Dimitri Yatsenko</dc:creator>
<dc:creator>Jacob Reimer</dc:creator>
<dc:creator>Alexander S Ecker</dc:creator>
<dc:creator>Edgar Y Walker</dc:creator>
<dc:creator>Fabian Sinz</dc:creator>
<dc:creator>Philipp Berens</dc:creator>
<dc:creator>Andreas Hoenselaar</dc:creator>
<dc:creator>Ronald James Cotton</dc:creator>
<dc:creator>Athanassios S. Siapas</dc:creator>
<dc:creator>Andreas S. Tolias</dc:creator>
<dc:date>2015-11-14</dc:date>
<dc:identifier>doi:10.1101/031658</dc:identifier>
<dc:title><![CDATA[DataJoint: managing big scientific data using MATLAB or Python]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2015-11-14</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.03.30.437358v1?rss=1">
<title>
<![CDATA[
DataJoint Elements: Data Workflows for Neurophysiology 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.03.30.437358v1?rss=1
</link>
<description><![CDATA[
A new resource--DataJoint Elements--provides modular designs for assembling complete workflow solutions to organize data and computations for common neurophysiology experiments. The designs are derived from working solutions developed in leading research groups using the open-source DataJoint framework to integrate data collection and analysis in collaborative workflows.
]]></description>
<dc:creator>Yatsenko, D.</dc:creator>
<dc:creator>Nguyen, T.</dc:creator>
<dc:creator>Shen, S.</dc:creator>
<dc:creator>Gunalan, K.</dc:creator>
<dc:creator>Turner, C. A.</dc:creator>
<dc:creator>Guzman, R.</dc:creator>
<dc:creator>Sasaki, M.</dc:creator>
<dc:creator>Sitonic, D.</dc:creator>
<dc:creator>Reimer, J.</dc:creator>
<dc:creator>Walker, E. Y.</dc:creator>
<dc:creator>Tolias, A.</dc:creator>
<dc:date>2021-03-30</dc:date>
<dc:identifier>doi:10.1101/2021.03.30.437358</dc:identifier>
<dc:title><![CDATA[DataJoint Elements: Data Workflows for Neurophysiology]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-03-30</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.03.21.485240v1?rss=1">
<title>
<![CDATA[
Mooney Face Image Processing in Deep Convolutional Neural Networks Compared to Humans 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.03.21.485240v1?rss=1
</link>
<description><![CDATA[
Deep Convolutional Neural Networks (CNNs) are criticised for their reliance on local shape features and texture rather than global shape. We test whether CNNs are able to process global shape information in the absence of local shape cues and texture by testing their performance on Mooney stimuli, which are face images thresholded to binary values. More specifically, we assess whether CNNs classify these abstract stimuli as face-like, and whether they exhibit the face inversion effect (FIE), where upright stimuli are classified positively at a higher rate compared to inverted. We tested two standard networks, one (CaffeNet) trained for general object recognition and another trained specifically for facial recognition (DeepFace). We found that both networks perform perceptual completion and exhibit the FIE, which is present over all levels of specificity. By matching the false positive rate of CNNs to humans, we found that both networks performed closer to the human average (85.73% for upright, 57.25% for inverted) for both conditions (61.31% and 62.70% for upright, 48.61% and 42.26% for inverted, for CaffeNet and DeepFace respectively). Rank order correlation between CNNs and humans across individual stimuli shows a significant correlation in upright and inverted conditions, indicating a relationship in image difficulty between observers and the model. We conclude that in spite of the texture and local shape bias of CNNs, which makes their performance distinct from humans, they are still able to process object images holistically.
]]></description>
<dc:creator>Zeman, A.</dc:creator>
<dc:creator>Leers, T.</dc:creator>
<dc:creator>Op de Beeck, H.</dc:creator>
<dc:date>2022-03-23</dc:date>
<dc:identifier>doi:10.1101/2022.03.21.485240</dc:identifier>
<dc:title><![CDATA[Mooney Face Image Processing in Deep Convolutional Neural Networks Compared to Humans]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-03-23</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.06.25.449912v1?rss=1">
<title>
<![CDATA[
Assessing the influence of local neural activity on global connectivity fluctuations: Application to human intracranial EEG during a cognitive task 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.06.25.449912v1?rss=1
</link>
<description><![CDATA[
Cognition-relevant information is processed by different brain areas that cooperate to eventually produce a response. The relationship between local activity and global brain states during such processes, however, remains for the most part unexplored. To address this question, we designed a simple face-recognition task performed by patients with drug-resistant epilepsy monitored with intracranial EEG. Based on our observations, we developed a novel analytical framework (named the "local-global" framework) to statistically correlate the brain activity in every recorded gray-matter region with the widespread connectivity fluctuations, as a proxy to identify concurrent local activations and global brain phenomena that may plausibly reflect a common functional network during cognition. The application of the local-global framework to the data from 3 subjects showed that similar connectivity fluctuations found across patients were mainly coupled to the local activity of brain areas involved in face information processing. In particular, our findings provide preliminary evidence that the reported global measures might be a novel signature of functional brain activity reorganization when a stimulus is processed in a task context, regardless of the specific recorded areas.

Data availability statement: Due to institutional restrictions, the data that support the findings of this study can be accessed only with a data sharing agreement. All code used in this work can be found at https://github.com/mvilavidal/localglobal2022.

Funding statement: MVV was supported by a fellowship from "la Caixa" Foundation, Spain (ID 100010434, fellowship code LCF/BQ/DE17/11600022). MVV and ATC were supported by the Bial Foundation grant 106/18. GD and ATC were supported by the project "Cluster Emergent del Cervell Huma" (CECH, ref. 001-P-001682), within the framework of the European Research Development Fund Operational Program of Catalonia 2014-2020. GD was supported by a Spanish national research project (ref. PID2019-105772GB-I00 MCIU AEI) funded by the Spanish Ministry of Science, Innovation and Universities (MCIU), State Research Agency (AEI); HBP SGA3 Human Brain Project Specific Grant Agreement 3 (grant agreement no. 945539), funded by the EU H2020 FET Flagship programme; SGR Research Support Group support (ref. 2017 SGR 1545), funded by the Catalan Agency for Management of University and Research Grants (AGAUR); Neurotwin Digital twins for model-driven non-invasive electrical brain stimulation (grant agreement ID: 101017716) funded by the EU H2020 FET Proactive programme; euSNN European School of Network Neuroscience (grant agreement ID: 860563) funded by the EU H2020 MSCA-ITN Innovative Training Networks; Brain-Connects: Brain Connectivity during Stroke Recovery and Rehabilitation (id. 201725.33) funded by the Fundacio La Marato TV3; Corticity, FLAG-ERA JTC 2017 (ref. PCI2018-092891) funded by the Spanish Ministry of Science, Innovation and Universities (MCIU), State Research Agency (AEI).

Conflict of interest disclosure: The authors declare no conflicts of interest.

Ethics approval statement: The study was conducted in accordance with the Declaration of Helsinki. All diagnostic, surgical and experimental procedures have been previously approved by The Clinical Ethical Committee of Hospital Clinic (Barcelona, Spain). In particular, the specific proposal to run the cognitive experiments for this study was approved in March 2020 under the code number HCB/2020/0182.

Patient consent statement: Informed consent was explicitly obtained from all participants prior to the recordings and the performance of the tasks.
]]></description>
<dc:creator>Vila-Vidal, M.</dc:creator>
<dc:creator>Khawaja, M.</dc:creator>
<dc:creator>Carreno, M.</dc:creator>
<dc:creator>Roldan, P.</dc:creator>
<dc:creator>Rumia, J.</dc:creator>
<dc:creator>Donaire, A.</dc:creator>
<dc:creator>Deco, G.</dc:creator>
<dc:creator>Tauste Campo, A.</dc:creator>
<dc:date>2021-06-26</dc:date>
<dc:identifier>doi:10.1101/2021.06.25.449912</dc:identifier>
<dc:title><![CDATA[Assessing the influence of local neural activity on global connectivity fluctuations: Application to human intracranial EEG during a cognitive task]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-06-26</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.12.31.474634v1?rss=1">
<title>
<![CDATA[
AVATAR: AI Vision Analysis for Three-dimensional Action in Real-time 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.12.31.474634v1?rss=1
</link>
<description><![CDATA[
Artificial intelligence (AI) is an emerging tool for high-resolution behavioural analysis and the conduct of human-free behavioural experiments. Here, we applied an AI-based system, AVATAR, which automatically virtualises 3D motions from the detection of 9 body parts. This allows quantification, classification and detection of specific action sequences in real time and facilitates closed-loop manipulation, triggered by the onset of specific behaviours, in freely moving mice.
]]></description>
<dc:creator>Kim, D.</dc:creator>
<dc:creator>Kim, D.-G.</dc:creator>
<dc:creator>Shin, A.</dc:creator>
<dc:creator>Jeong, Y.-C.</dc:creator>
<dc:creator>Park, S.</dc:creator>
<dc:date>2022-01-02</dc:date>
<dc:identifier>doi:10.1101/2021.12.31.474634</dc:identifier>
<dc:title><![CDATA[AVATAR: AI Vision Analysis for Three-dimensional Action in Real-time]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-01-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.08.01.502338v1?rss=1">
<title>
<![CDATA[
Idiosyncratic relation between human brain activity and behavior 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.08.01.502338v1?rss=1
</link>
<description><![CDATA[
Research in neuroscience often assumes universal neural mechanisms, but increasing evidence points towards sizeable individual differences in brain activations. What remains unclear is the extent of the idiosyncrasy and whether different types of analyses are associated with different levels of idiosyncrasy. Here we develop a new method for addressing these questions. The method consists of computing the within-subject reliability and subject-to-group similarity of brain activations and submitting these values to a computational model that quantifies the relative strength of group- and subject-level factors. We apply this method to a perceptual decision-making task (N=50) and find that activations related to task, reaction time (RT), and confidence are influenced equally strongly by group- and subject-level factors. Both group- and subject-level factors are dwarfed by a noise factor, though higher levels of smoothing increase their contributions relative to noise. Overall, our method allows for the quantification of group- and subject-level factors of brain activations and thus provides a more detailed understanding of the idiosyncrasy levels in brain activations.
]]></description>
<dc:creator>Nakuci, J.</dc:creator>
<dc:creator>Yeon, J.</dc:creator>
<dc:creator>Xue, K.</dc:creator>
<dc:creator>Kim, J.-H.</dc:creator>
<dc:creator>Kim, S.-P.</dc:creator>
<dc:creator>Rahnev, D.</dc:creator>
<dc:date>2022-08-02</dc:date>
<dc:identifier>doi:10.1101/2022.08.01.502338</dc:identifier>
<dc:title><![CDATA[Idiosyncratic relation between human brain activity and behavior]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-08-02</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.08.09.455769v1?rss=1">
<title>
<![CDATA[
Information theory-based approach towards studying anti-coincidence detection via graded amplitude dendritic action potentials. 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.08.09.455769v1?rss=1
</link>
<description><![CDATA[
In contrast to the typical all-or-none action potential, the recent discovery of graded-amplitude action potentials in cortical neurons showed that dendrites can perform the XOR computation, previously thought to be possible only at the network level. Thus, these special neurons can perform anti-coincidence detection at the dendritic level, but much remains unanswered about this phenomenon. Can such an experimentally observed dendritic action potential generating system transmit information about stimuli having varying degrees of temporal overlap? Can the system add to the repertoire of computations performed at the dendritic level by enhancing information transmission about varying-amplitude stimuli? In this information theory-based study of single-compartment and two-compartment dendritic models, it is shown that such a system can indeed transmit information about the temporal overlap of stimuli as well as the amplitudes of stimuli, even at high input noise levels. First, the calculation of mutual information between single stimulus and response, i.e. I(S;R), with varying noise showed that the information about the temporally overlapping nature of stimuli is precisely transmitted by such a system. Secondly, the time evolution of mutual information was simulated through data from the system and positively reinforced the above-mentioned result. Next, varying-amplitude input stimuli were provided to the system, and calculation of mutual information between two stimuli and one response, i.e. I(S1,S2;R), with varying noise levels revealed that such a system optimally transmits the information about stimuli even at high noise levels. Finally, calculation of this information measurement with respect to time in an experiment with constant overlap but varying input amplitude again positively reinforced the result.

Key Points:
- Information theory-based measurements were employed to assess the role of graded-amplitude dendritic action potentials.
- Action potentials (APs) with maximal amplitudes for threshold-level stimuli and lower amplitudes for stronger stimuli were modelled with high-voltage Ca2+ (HVA-like) channels, a Ca2+-dependent (BK-like) channel, a leak channel and a calcium pump, in both a single-compartment model and a two-compartment dendritic model.
- Comparison with a control compartment generating constant-amplitude APs via standard Hodgkin-Huxley sodium and potassium channels revealed that such a compartment shows optimal information transmission about both varying-amplitude input current stimuli and varying-temporal-overlap stimuli.
]]></description>
<dc:creator>Sinha, V.</dc:creator>
<dc:date>2021-08-10</dc:date>
<dc:identifier>doi:10.1101/2021.08.09.455769</dc:identifier>
<dc:title><![CDATA[Information theory-based approach towards studying anti-coincidence detection via graded amplitude dendritic action potentials.]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-08-10</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.11.16.468862v1?rss=1">
<title>
<![CDATA[
Complementary population codes in the dorsal and ventral hippocampus during associative learning 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.11.16.468862v1?rss=1
</link>
<description><![CDATA[
Animals associate cues with outcomes and continually update these associations as new information is presented. The hippocampus is crucial for this, yet how neurons track changes in cue-outcome associations remains unclear. Using 2-photon calcium imaging, we tracked the same dCA1 and vCA1 neurons across days to determine how responses evolve across phases of odor-outcome learning. We find that, initially, odors elicited robust responses in dCA1, whereas in vCA1 responses emerged after learning, including broad representations that stretched across cue, trace, and outcome periods. Population dynamics in both regions rapidly reorganized with learning, then stabilized into ensembles that stored odor representations for days, even after extinction or pairing with a different outcome. Finally, we found stable, robust signals across CA1 when anticipating reward, but not when anticipating inescapable shock. These results identify how the hippocampus encodes, stores, and updates learned associations, and illuminate the unique contributions of the dorsal and ventral hippocampus.
]]></description>
<dc:creator>Biane, J. S.</dc:creator>
<dc:creator>Ladow, M. A.</dc:creator>
<dc:creator>Stefanini, F.</dc:creator>
<dc:creator>Boddu, S. P.</dc:creator>
<dc:creator>Fan, A.</dc:creator>
<dc:creator>Hassan, S.</dc:creator>
<dc:creator>Dundar, N.</dc:creator>
<dc:creator>Apodaca-Montano, D. L.</dc:creator>
<dc:creator>Woods, N. I.</dc:creator>
<dc:creator>Khierbek, M. A.</dc:creator>
<dc:date>2021-11-18</dc:date>
<dc:identifier>doi:10.1101/2021.11.16.468862</dc:identifier>
<dc:title><![CDATA[Complementary population codes in the dorsal and ventral hippocampus during associative learning]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-11-18</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.06.03.494680v1?rss=1">
<title>
<![CDATA[
Population geometry enables fast sampling in spiking neural networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.06.03.494680v1?rss=1
</link>
<description><![CDATA[
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically-grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically-plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers--efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling--can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly-correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
]]></description>
<dc:creator>Masset, P.</dc:creator>
<dc:creator>Zavatone-Veth, J. A.</dc:creator>
<dc:creator>Connor, J. P.</dc:creator>
<dc:creator>Murthy, V. N.</dc:creator>
<dc:creator>Pehlevan, C.</dc:creator>
<dc:date>2022-06-05</dc:date>
<dc:identifier>doi:10.1101/2022.06.03.494680</dc:identifier>
<dc:title><![CDATA[Population geometry enables fast sampling in spiking neural networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-06-05</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.08.23.505015v1?rss=1">
<title>
<![CDATA[
RTNet: A neural network that exhibits the signatures of human perceptual decision making 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.08.23.505015v1?rss=1
</link>
<description><![CDATA[
Convolutional neural networks show promise as models of biological vision. However, their decision behavior, including the facts that they are deterministic and use an equal number of computations for easy and difficult stimuli, differs markedly from human decision-making, thus limiting their applicability as models of human perceptual behavior. Here we develop a new neural network, RTNet, that generates stochastic decisions and human-like response time (RT) distributions. We further performed comprehensive tests that showed RTNet reproduces all foundational features of human accuracy, RT, and confidence and does so better than all current alternatives. To test RTNet's ability to predict human behavior on novel images, we collected accuracy, RT, and confidence data from 60 human subjects performing a digit discrimination task. We found that the accuracy, RT, and confidence produced by RTNet for individual novel images correlated with the same quantities produced by human subjects. Critically, human subjects who were more similar to the average human performance were also found to be closer to RTNet's predictions, suggesting that RTNet successfully captured average human behavior. Overall, RTNet is a promising model of human response times that exhibits the critical signatures of perceptual decision making.
]]></description>
<dc:creator>Rafiei, F.</dc:creator>
<dc:creator>Rahnev, D.</dc:creator>
<dc:date>2022-08-25</dc:date>
<dc:identifier>doi:10.1101/2022.08.23.505015</dc:identifier>
<dc:title><![CDATA[RTNet: A neural network that exhibits the signatures of human perceptual decision making]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-08-25</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.06.01.446561v1?rss=1">
<title>
<![CDATA[
Spatial and temporal autocorrelation weave human brain networks 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.06.01.446561v1?rss=1
</link>
<description><![CDATA[
High-throughput experimental methods in neuroscience have led to an explosion of techniques for measuring complex interactions and multi-dimensional patterns. However, whether sophisticated measures of emergent phenomena can be traced back to simpler low-dimensional statistics is largely unknown. To explore this question, we examine resting state fMRI (rs-fMRI) data using complex topology measures from network neuroscience. We show that spatial and temporal autocorrelation are reliable statistics which explain numerous measures of network topology. Surrogate timeseries with subject-matched spatial and temporal autocorrelation capture nearly all reliable individual and regional variation in these topology measures. Network topology changes during aging are driven by spatial autocorrelation, and multiple serotonergic drugs causally induce the same topographic change in temporal autocorrelation. This reductionistic interpretation of widely-used complexity measures may help link them to neurobiology.
]]></description>
<dc:creator>Shinn, M.</dc:creator>
<dc:creator>Hu, A.</dc:creator>
<dc:creator>Turner, L.</dc:creator>
<dc:creator>Noble, S.</dc:creator>
<dc:creator>Achard, S.</dc:creator>
<dc:creator>Anticevic, A.</dc:creator>
<dc:creator>Scheinost, D.</dc:creator>
<dc:creator>Constable, R. T.</dc:creator>
<dc:creator>Lee, D.</dc:creator>
<dc:creator>Bullmore, E. T.</dc:creator>
<dc:creator>Murray, J. D.</dc:creator>
<dc:date>2021-06-01</dc:date>
<dc:identifier>doi:10.1101/2021.06.01.446561</dc:identifier>
<dc:title><![CDATA[Spatial and temporal autocorrelation weave human brain networks]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-06-01</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.04.08.487597v1?rss=1">
<title>
<![CDATA[
Flexible Intentions in the Posterior Parietal Cortex: An Active Inference Theory 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.04.08.487597v1?rss=1
</link>
<description><![CDATA[
We present a normative computational theory of how neural circuitry may support visually-guided goal-directed actions in a dynamic environment. The model builds on Active Inference, in which perception and motor control signals are inferred through dynamic minimization of generalized prediction errors. The Posterior Parietal Cortex (PPC) is proposed to maintain constantly updated expectations, or beliefs over the environmental state, and by manipulating them through flexible intentions it is involved in dynamically generating goal-directed actions. In turn, the Dorsal Visual Stream (DVS) and the proprioceptive pathway implement generative models that translate the high-level belief into sensory-level predictions to infer targets, posture, and motor commands. A proof-of-concept agent embodying visual and proprioceptive sensors and an actuated upper limb was tested on target-reaching tasks. The agent behaved correctly under various conditions, including static and dynamic targets, different sensory feedbacks, sensory precisions, intention gains, and movement policies; limit conditions were identified as well. Active Inference driven by dynamic and flexible intentions can thus support goal-directed behavior in constantly changing environments, and the PPC putatively hosts its core intention mechanism. More broadly, the study provides a normative basis for research on goal-directed behavior in end-to-end settings and further advances mechanistic theories of active biological systems.
]]></description>
<dc:creator>Priorelli, M.</dc:creator>
<dc:creator>Stoianov, I. P.</dc:creator>
<dc:date>2022-04-10</dc:date>
<dc:identifier>doi:10.1101/2022.04.08.487597</dc:identifier>
<dc:title><![CDATA[Flexible Intentions in the Posterior Parietal Cortex: An Active Inference Theory]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-04-10</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.08.06.503020v1?rss=1">
<title>
<![CDATA[
Humans account for cognitive costs when finding shortcuts: An information-theoretic analysis of navigation 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.08.06.503020v1?rss=1
</link>
<description><![CDATA[
When faced with navigating back somewhere we have been before, we might either retrace our steps or seek a shorter path. Both choices have costs. Here, we ask whether it is possible to characterize formally the choice of navigational plans as a bounded rational process that trades off the quality of the plan (e.g., its length) and the cognitive cost required to find and implement it. We analyze the navigation strategies of two groups of people who are first trained to follow a "default policy" taking a route in a virtual maze and then asked to navigate to various known goal destinations, either in the way they want ("Go To Goal") or by taking novel shortcuts ("Take Shortcut"). We address these wayfinding problems using InfoRL: an information-theoretic approach that formalizes the cognitive cost of devising a navigational plan as the informational cost to deviate from a well-learned route (the "default policy"). In InfoRL, optimality refers to finding the best trade-off between route length and the amount of control information required to find it. We report five main findings. First, the navigational strategies automatically identified by InfoRL correspond closely to different routes (optimal or suboptimal) in the virtual reality map, which were annotated by hand in previous research. Second, people deliberate more in places where the value of investing cognitive resources (i.e., relevant goal information) is greater. Third, compared to the group of people who receive the "Go To Goal" instruction, those who receive the "Take Shortcut" instruction find shorter but less optimal solutions, reflecting the intrinsic difficulty of finding optimal shortcuts. Fourth, those who receive the "Go To Goal" instruction flexibly modulate their cognitive resources, depending on the benefits of finding the shortcut. Finally, we found a surprising amount of variability in the choice of navigational strategies and resource investment across participants.
Taken together, these results illustrate the benefits of using InfoRL to address navigational planning problems from a bounded rational perspective.
]]></description>
<dc:creator>Lancia, G. L.</dc:creator>
<dc:creator>Eluchans, M.</dc:creator>
<dc:creator>D'Alessandro, M.</dc:creator>
<dc:creator>Spiers, H. J.</dc:creator>
<dc:creator>Pezzulo, G.</dc:creator>
<dc:date>2022-08-06</dc:date>
<dc:identifier>doi:10.1101/2022.08.06.503020</dc:identifier>
<dc:title><![CDATA[Humans account for cognitive costs when finding shortcuts: An information-theoretic analysis of navigation]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-08-06</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.06.10.495710v1?rss=1">
<title>
<![CDATA[
Off-manifold coding in visual cortex revealed by sleep 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.06.10.495710v1?rss=1
</link>
<description><![CDATA[
Low-dimensional neural manifolds are controversial in part because it is unclear how to reconcile them with high-dimensional representations observed in areas such as primary visual cortex (V1). We addressed this by recording neuronal activity in V1 during slow-wave sleep, enabling us to identify internally-generated low-dimensional manifold structure and evaluate its role during visual processing. We found that movements and visual stimuli were both encoded in the "on-manifold" subspace preserved during sleep. However, only stimuli were encoded in the "off-manifold" subspace, which contains activity patterns that are less likely than chance to occur spontaneously during sleep. This off-manifold activity comprises sparse firing in neurons with the strongest low-dimensional modulation by movement, which paradoxically prevents movement-evoked activity from interfering with stimulus representations. These results reveal an unexpected link between low-dimensional dynamics and sparse coding, which together create a protected off-manifold coding space keeping high-dimensional representations separable from movement-evoked activity.
]]></description>
<dc:creator>de Oliveira, E. F.</dc:creator>
<dc:creator>Kim, S.</dc:creator>
<dc:creator>Qiu, T. S.</dc:creator>
<dc:creator>Peyrache, A.</dc:creator>
<dc:creator>Batista-Brito, R.</dc:creator>
<dc:creator>Sjulson, L.</dc:creator>
<dc:date>2022-06-13</dc:date>
<dc:identifier>doi:10.1101/2022.06.10.495710</dc:identifier>
<dc:title><![CDATA[Off-manifold coding in visual cortex revealed by sleep]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-06-13</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.07.09.499417v1?rss=1">
<title>
<![CDATA[
Parallel planning through an optimal neural subspace in motor cortex 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.07.09.499417v1?rss=1
</link>
<description><![CDATA[
How do patterns of neural activity in motor cortex contribute to the planning of a movement? A recent theory developed for single movements proposes that motor cortex acts as a dynamical system whose initial state is optimized during the preparatory phase of the movement. This theory makes important yet untested predictions about preparatory dynamics in more complex behavioral settings. Here, we analyzed preparatory activity in non-human primates planning not one, but two movements simultaneously. As predicted by the theory, we found that parallel planning was achieved by adjusting preparatory activity within an optimal subspace to an intermediate state reflecting a tradeoff between the two movements. The theory quantitatively accounted for the relationship between this intermediate state and fluctuations in the animals' behavior at the trial level. These results uncover a simple mechanism for planning multiple movements in parallel, and further point to motor planning as a controlled dynamical process.
]]></description>
<dc:creator>Meirhaeghe, N.</dc:creator>
<dc:creator>Riehle, A.</dc:creator>
<dc:creator>Brochier, T.</dc:creator>
<dc:date>2022-07-10</dc:date>
<dc:identifier>doi:10.1101/2022.07.09.499417</dc:identifier>
<dc:title><![CDATA[Parallel planning through an optimal neural subspace in motor cortex]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-07-10</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.08.02.502503v1?rss=1">
<title>
<![CDATA[
Direct Speech Reconstruction from Sensorimotor Brain Activity with Optimized Deep Learning Models 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.08.02.502503v1?rss=1
</link>
<description><![CDATA[
Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. We show that 1) dedicated machine learning optimization of reconstruction models is key for achieving the best reconstruction performance; 2) individual word decoding in reconstructed speech achieves 92-100% accuracy (chance level is 8%); 3) direct reconstruction from sensorimotor brain activity produces intelligible speech. These results underline the need for model optimization in achieving the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex can offer for development of next-generation BCI technology for communication.
]]></description>
<dc:creator>Berezutskaya, J.</dc:creator>
<dc:creator>Freudenburg, Z. V.</dc:creator>
<dc:creator>Vansteensel, M. J.</dc:creator>
<dc:creator>Aarnoutse, E. J.</dc:creator>
<dc:creator>Ramsey, N. F.</dc:creator>
<dc:creator>van Gerven, M. A. J.</dc:creator>
<dc:date>2022-08-04</dc:date>
<dc:identifier>doi:10.1101/2022.08.02.502503</dc:identifier>
<dc:title><![CDATA[Direct Speech Reconstruction from Sensorimotor Brain Activity with Optimized Deep Learning Models]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-08-04</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2020.09.22.308981v1?rss=1">
<title>
<![CDATA[
A synergistic core for human brain evolution and cognition 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2020.09.22.308981v1?rss=1
</link>
<description><![CDATA[
A fundamental question in neuroscience is how brain organisation gives rise to humans' unique cognitive abilities. Although complex cognition is widely assumed to rely on frontal and parietal brain regions, the underlying mechanisms remain elusive: current approaches are unable to disentangle different forms of information processing in the brain. Here, we introduce a powerful framework to identify synergistic and redundant contributions to neural information processing and cognition. Leveraging multimodal data including functional MRI, PET, cytoarchitectonics and genetics, we reveal that synergistic interactions are the fundamental drivers of complex human cognition. Whereas redundant information dominates sensorimotor areas, synergistic activity is closely associated with the brain's prefrontal-parietal and default networks; furthermore, meta-analytic results demonstrate a close relationship between high-level cognitive tasks and synergistic information. From an evolutionary perspective, the human brain exhibits a higher prevalence of synergistic information than non-human primates. At the macroscale, we demonstrate that high-synergy regions underwent the highest degree of evolutionary cortical expansion. At the microscale, human-accelerated genes promote synergistic interactions by enhancing synaptic transmission. These convergent results provide critical insights that synergistic neural interactions underlie the evolution and functioning of humans' sophisticated cognitive abilities, and demonstrate the power of our widely applicable information decomposition framework.
]]></description>
<dc:creator>Luppi, A. I.</dc:creator>
<dc:creator>Mediano, P. A. M.</dc:creator>
<dc:creator>Rosas, F. E.</dc:creator>
<dc:creator>Holland, N.</dc:creator>
<dc:creator>Fryer, T. D.</dc:creator>
<dc:creator>O'Brien, J. T.</dc:creator>
<dc:creator>Rowe, J. B.</dc:creator>
<dc:creator>Menon, D. K.</dc:creator>
<dc:creator>Bor, D.</dc:creator>
<dc:creator>Stamatakis, E. A.</dc:creator>
<dc:date>2020-09-22</dc:date>
<dc:identifier>doi:10.1101/2020.09.22.308981</dc:identifier>
<dc:title><![CDATA[A synergistic core for human brain evolution and cognition]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2020-09-22</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2021.06.25.449734v1?rss=1">
<title>
<![CDATA[
Coupling of pupil- and neuronal population dynamics reveals diverse influences of arousal on cortical processing 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2021.06.25.449734v1?rss=1
</link>
<description><![CDATA[
Fluctuations in arousal, controlled by subcortical neuromodulatory systems, continuously shape cortical state, with profound consequences for information processing. Yet, how arousal signals influence cortical population activity in detail has so far only been characterized for a few selected brain regions. Traditional accounts conceptualize arousal as a homogeneous modulator of neural population activity across the cerebral cortex. Recent insights, however, point to a higher specificity of arousal effects on different components of neural activity and across cortical regions. Here, we provide a comprehensive account of the relationships between fluctuations in arousal and neuronal population activity across the human brain. Exploiting the established link between pupil size and central arousal systems, we performed concurrent magnetoencephalographic (MEG) and pupillographic recordings in a large number of participants, pooled across three laboratories. We found a cascade of effects relative to the peak timing of spontaneous pupil dilations: Decreases in low-frequency (2-8 Hz) activity in temporal and lateral frontal cortex, followed by increased high-frequency (>64 Hz) activity in mid-frontal regions, followed by monotonic and inverted-U relationships with intermediate frequency-range activity (8-32 Hz) in occipito-parietal regions. Pupil-linked arousal also coincided with widespread changes in the structure of the aperiodic component of cortical population activity, indicative of changes in the excitation-inhibition balance in underlying microcircuits. Our results provide a novel basis for studying the arousal modulation of cognitive computations in cortical circuits.
]]></description>
<dc:creator>Pfeffer, T.</dc:creator>
<dc:creator>Keitel, C.</dc:creator>
<dc:creator>Kluger, D. S.</dc:creator>
<dc:creator>Keitel, A.</dc:creator>
<dc:creator>Russmann, A.</dc:creator>
<dc:creator>Thut, G.</dc:creator>
<dc:creator>Donner, T. H.</dc:creator>
<dc:creator>Gross, J.</dc:creator>
<dc:date>2021-06-25</dc:date>
<dc:identifier>doi:10.1101/2021.06.25.449734</dc:identifier>
<dc:title><![CDATA[Coupling of pupil- and neuronal population dynamics reveals diverse influences of arousal on cortical processing]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2021-06-25</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.08.11.503296v1?rss=1">
<title>
<![CDATA[
Confidence of probabilistic predictions modulates the cortical response to pain 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.08.11.503296v1?rss=1
</link>
<description><![CDATA[
Pain typically evolves over time and the brain needs to learn this temporal evolution to predict how pain is likely to change in the future and orient behavior. This process is termed temporal statistical learning (TSL). Recently, it has been shown that TSL for pain sequences can be achieved using optimal Bayesian inference, which is encoded in somatosensory processing regions. Here, we investigate whether the confidence of these probabilistic predictions modulates the EEG response to noxious stimuli, using a TSL task. Confidence measures the uncertainty about the probabilistic prediction, irrespective of its actual outcome. Bayesian models dictate that the confidence about probabilistic predictions should be integrated with incoming inputs and weight learning, such that it modulates the early components of the EEG responses to noxious stimuli, and this should be captured by a negative correlation: when confidence is higher, the early neural responses are smaller as the brain relies more on expectations/predictions and less on sensory inputs (and vice versa). We show that participants were able to predict the sequence transition probabilities using Bayesian inference, with some forgetting. Then, we find that the confidence of these probabilistic predictions was negatively associated with the amplitude of the N2 and P2 components of the Vertex Potential: the more confident participants were about their predictions, the smaller was the Vertex Potential. These results confirm key predictions of a Bayesian learning model and clarify the functional significance of the early EEG responses to nociceptive stimuli, as being implicated in confidence-weighted statistical learning.

SIGNIFICANCE: The functional significance of EEG responses to pain has long been debated because of their dramatic variability. This study indicates that such variability can be partly related to the confidence of probabilistic predictions emerging from sequences of pain inputs. The confidence of pain predictions is negatively associated with the cortical EEG responses to pain. This indicates that the brain relies less on sensory inputs when confidence is higher and shows us that confidence-weighted statistical learning modulates the cortical response to pain.
]]></description>
<dc:creator>Mulders, D.</dc:creator>
<dc:creator>Seymour, B.</dc:creator>
<dc:creator>Mouraux, A.</dc:creator>
<dc:creator>Mancini, F.</dc:creator>
<dc:date>2022-08-15</dc:date>
<dc:identifier>doi:10.1101/2022.08.11.503296</dc:identifier>
<dc:title><![CDATA[Confidence of probabilistic predictions modulates the cortical response to pain]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-08-15</prism:publicationDate>
<prism:section></prism:section>
</item>
<item rdf:about="https://biorxiv.org/cgi/content/short/2022.09.21.508935v1?rss=1">
<title>
<![CDATA[
Irregular optogenetic stimulation waveforms can induce naturalistic patterns of hippocampal spectral activity 
]]>
</title>
<link>
https://biorxiv.org/cgi/content/short/2022.09.21.508935v1?rss=1
</link>
<description><![CDATA[
Introduction: Brain stimulation is a fundamental and effective therapy for neurological diseases including Parkinson's disease, essential tremor, and epilepsy. One key challenge in delivering effective brain stimulation is identifying the stimulation parameters, such as the amplitude, frequency, contact configuration, and pulse width, that induce an optimal change in symptoms, behavior, or neural activity. Most clinical and translational studies use constant-frequency pulses of stimulation, but stimulation with irregular pulse patterns or non-pulsatile waveforms might induce unique changes in neural activity that could enable better therapeutic responses. Here, we comprehensively evaluate several optogenetic stimulation waveforms, report their differing effects on hippocampal spectral activity, and compare these induced effects to activity recorded during natural behavior.

Methods: Sprague-Dawley rats were prepared for pan-neuronal excitatory optogenetic stimulation of the medial septum (hSyn-ChR2) and 16-channel microelectrode recording in CA1 and CA3 layers of the hippocampus. We performed grid and random sampling of the parameters comprising several stimulation waveforms, including standard pulse, nested pulse, sinusoid, double sinusoid, and Poisson pulse waveforms.

Results: We comprehensively report the effects of changing stimulation parameters in these parameter spaces on two key biomarkers of hippocampal function, theta (4-10 Hz) and gamma (32-50 Hz) power. Similarly robust excitation of hippocampal gamma power was observed across all waveforms, whereas no set of stimulation parameters was sufficient to consistently increase power in the theta band beyond baseline levels of activity (despite the prominent role of the medial septum in pacing hippocampal theta oscillations). Using a manifold learning algorithm to compare high-dimensional neural activity, we show that irregular stimulation patterns produce differing effects with respect to multi-band patterns of activity and can induce activity patterns that more closely resemble activity recorded during natural behavior than conventional parameters.

Conclusion: Our counter-intuitive findings - that stimulation of the medial septum ubiquitously does not increase hippocampal theta power, and that different waveforms have similar effects on single power bands - contradict recent trends in brain stimulation research, necessitating greater caution and fewer mechanistic assumptions as to how a given stimulation target or waveform will modulate a neurophysiological biomarker of disease. We also reveal that irregular stimulation patterns can have biomimetic utility, promoting their exploration in medical applications where inducing a particular activity pattern can have therapeutic benefit. Last, we demonstrate a scalable data-driven analysis strategy that can make the discovery of such physiologically informed temporal stimulation patterns more empirically tractable in translational settings.
]]></description>
<dc:creator>Cole, E. R.</dc:creator>
<dc:creator>Eggers, T. E.</dc:creator>
<dc:creator>Weiss, D.</dc:creator>
<dc:creator>Connolly, M. J.</dc:creator>
<dc:creator>Gombolay, M. C.</dc:creator>
<dc:creator>Laxpati, N. G.</dc:creator>
<dc:creator>Gross, R. E.</dc:creator>
<dc:date>2022-09-22</dc:date>
<dc:identifier>doi:10.1101/2022.09.21.508935</dc:identifier>
<dc:title><![CDATA[Irregular optogenetic stimulation waveforms can induce naturalistic patterns of hippocampal spectral activity]]></dc:title>
<dc:publisher>Cold Spring Harbor Laboratory Press</dc:publisher>
<prism:publicationDate>2022-09-22</prism:publicationDate>
<prism:section></prism:section>
</item>
</rdf:RDF>
