CNS 2012 Atlanta/Decatur: Tutorials Program
All tutorials will be held on July 21st at Agnes Scott College, in the Bullock Science Center building.
Confirmed tutorials
T1: Modeling and interpretation of extracellular potentials
9:00-12:00 and 13:30-16:30, Room 102 W
Gaute T. Einevoll (Norwegian University of Life Sciences, Ås), Szymon Łęski (Nencki Institute of Experimental Biology, Warsaw), Espen Hagen (Norwegian University of Life Sciences, Ås)
lecture material
T2: Theory of correlation transfer and correlation structure in recurrent networks
9:00-12:00 and 13:30-16:30, Room 112 W
Ruben Moreno-Bote (Foundation Sant Joan de Déu, Barcelona), Moritz Helias (Research Center Jülich)
lecture material: part 1, part 2
T3: Parameter Search for Neural Spiking Activity: Creation and Analysis of Simulation Databases
9:00-12:00 and 13:30-16:30, Room 210 E
Cengiz Gunay (Emory University), Anca Doloc-Mihu (Emory University), Vladislav Sekulic (University of Toronto)
lecture material
T4: Complex networks and graph theoretical concepts
9:00-12:00, Room 209 W B
Duane Nykamp (University of Minnesota)
lecture material
T5: Workflows for reproducible research in computational neuroscience
13:30-16:30, Room 209 W A
Andrew P. Davison (UNIC, CNRS, Gif sur Yvette)
lecture material
T6: The finer points of modeling (with NEURON)
9:00-12:00, Room G9
Ted Carnevale (Yale University School of Medicine, New Haven), William W. Lytton (SUNY Downstate Medical Center, NY)
lecture material: part 1, part 2
T7: Real-time simulation of large-scale neural models using the NeoCortical Simulator (NCS)
13:30-16:30, Room 209 W B
Laurence C. Jayet Bray (University of Nevada, Reno), Roger V. Hoang (University of Nevada, Reno), Frederick C. Harris, Jr. (University of Nevada, Reno)
lecture material
Tutorial abstracts
T1: Modeling and interpretation of extracellular potentials
Gaute T. Einevoll (Norwegian University of Life Sciences, Ås), Szymon Łęski (Nencki Institute of Experimental Biology, Warsaw), Espen Hagen (Norwegian University of Life Sciences, Ås)
While extracellular electrical recordings have long been the workhorse of electrophysiology, the interpretation of such recordings is not trivial. The recorded extracellular potentials in general stem from a complicated sum of contributions from all transmembrane currents of the neurons in the vicinity of the electrode contact. The duration of spikes, the extracellular signatures of neuronal action potentials, is so short that the high-frequency part of the recorded signal, the multi-unit activity (MUA), can often be sorted into spiking contributions from the individual neurons surrounding the electrode. However, no such simplifying feature aids us in the interpretation of the low-frequency part, the local field potential (LFP). To take full advantage of the new generation of silicon-based multielectrodes recording from tens, hundreds or thousands of positions simultaneously, we thus need to develop new data analysis methods grounded in the underlying biophysics. This is the topic of the present tutorial.
In the first part of this tutorial we will go through
- the biophysics of extracellular recordings in the brain,
- a scheme for biophysically detailed modeling of extracellular potentials and its application to modeling single spikes [1-3], MUA [4] and LFP, both from single neurons [5] and populations of neurons [4,6], and
- methods for
  - estimation of current source density (CSD) [7] from LFP data, such as the iCSD [8-10] and kCSD [11] methods (see the sketch of the traditional estimator after this list), and
  - decomposition of recorded signals in cortex into contributions from various laminar populations, i.e., (i) laminar population analysis (LPA) [12] based on joint modeling of LFP and MUA, and (ii) a novel scheme using LFP and known constraints on the synaptic connections [13].
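As background for the CSD items above, here is a minimal sketch of the traditional finite-difference estimator that the iCSD and kCSD methods [8-11] generalize, assuming constant extracellular conductivity and purely laminar (depth-only) variation:

```latex
% Poisson's equation links the CSD C to the LFP \phi for constant
% conductivity \sigma, with laminar (depth-only) variation:
C(z) = -\sigma \, \frac{\partial^2 \phi(z)}{\partial z^2}
% Traditional estimate on a linear probe with contact spacing h:
\hat{C}(z_j) = -\sigma \, \frac{\phi(z_{j+1}) - 2\,\phi(z_j) + \phi(z_{j-1})}{h^2}
```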
In the second part the participants will get demonstrations and hands-on experience with
- LFPy (compneuro.umb.no/LFPy), a versatile tool based on Python and the simulation program NEURON [14] (www.neuron.yale.edu) for calculating extracellular potentials around neurons (see the forward-model sketch below), and
- tools for iCSD analysis.
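To make the forward-modeling scheme concrete, here is a minimal numpy sketch of the point-source approximation underlying such calculations; the function name, geometry, and parameter values are illustrative assumptions, not LFPy's actual API:

```python
import numpy as np

def extracellular_potential(i_mem, src_pos, elec_pos, sigma=0.3):
    """Point-source forward model: with currents in nA, positions in um,
    and sigma in S/m, the returned potentials are in mV."""
    # distances between every electrode contact and every current source
    d = np.linalg.norm(elec_pos[:, None, :] - src_pos[None, :, :], axis=2)
    d = np.maximum(d, 1.0)  # clip to avoid the singularity at a source
    # phi_e = sum_n I_n / (4 pi sigma |r_e - r_n|)
    return (i_mem[None, :] / (4 * np.pi * sigma * d)).sum(axis=1)

# toy example: a current sink and its return current 200 um apart,
# recorded along a vertical line of nine electrode contacts
src = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 200.0]])
i_mem = np.array([-1.0, 1.0])  # nA; membrane currents must sum to zero
elec = np.array([[50.0, 0.0, z] for z in np.linspace(-100, 300, 9)])
print(extracellular_potential(i_mem, src, elec))
```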
References:
[1] G Holt, C Koch, J Comp Neurosci 6:169 (1999)
[2] J Gold et al, J Neurophysiol 95:3113 (2006)
[3] KH Pettersen and GT Einevoll, Biophys J 94:784 (2008)
[4] KH Pettersen et al, J Comp Neurosci 24:291 (2008)
[5] H Lindén et al, J Comp Neurosci 29:423 (2010)
[6] H Lindén et al, Neuron 72:859 (2011)
[7] C Nicholson and JA Freeman, J Neurophysiol 38:356 (1975)
[8] KH Pettersen et al, J Neurosci Meth 154:116 (2006)
[9] S Łęski et al, Neuroinform 5:207 (2007)
[10] S Łęski et al, Neuroinform 9:401 (2011)
[11] J Potworowski et al, Neural Comp 24:541 (2012)
[12] GT Einevoll et al, J Neurophysiol 97:2174 (2007)
[13] SL Gratiy et al, Front Neuroinf 5:32 (2011)
[14] ML Hines et al, Front Neuroinf 3:1 (2009)
T2: Theory of correlation transfer and correlation structure in recurrent networks
Ruben Moreno-Bote (Foundation Sant Joan de Déu, Barcelona), Moritz Helias (Research Center Jülich)
In the first part, we will study correlations arising from pairs of neurons sharing common fluctuations and/or inputs. Using integrate-and-fire neurons, we will show how to compute the firing rate, auto-correlation and cross-correlation functions of the output spike trains. The transfer of input correlations to output correlations will be discussed. We will show that the output correlations are generally weaker than the input correlations [Moreno-Bote and Parga, 2006], that the shape of the cross-correlation functions depends on the working regime of the neuron, and that the output correlations strongly depend on the output firing rate of the neurons [de la Rocha et al, 2007]. We will also study generalizations of these results to the case where the pair of neurons is reciprocally connected.
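As a flavor of the computations covered in this part, here is a minimal Python sketch of two leaky integrate-and-fire neurons driven by partially shared noise; all parameter values are illustrative assumptions, and the measured spike-count correlation typically comes out below the input correlation, consistent with [Moreno-Bote and Parga, 2006]:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 50.0                    # time step and duration (s)
n = int(T / dt)
tau, v_th, v_reset = 20e-3, 1.0, 0.0  # membrane time constant, threshold, reset
c_in, mu, s = 0.3, 1.05, 0.5          # input correlation, mean drive, noise level

shared = rng.standard_normal(n)       # fluctuations common to both neurons
spikes = np.zeros((2, n), dtype=bool)
for i in range(2):
    private = rng.standard_normal(n)
    noise = np.sqrt(c_in) * shared + np.sqrt(1 - c_in) * private
    v = 0.0
    for t in range(n):
        v += dt / tau * (mu - v) + s * np.sqrt(dt / tau) * noise[t]
        if v >= v_th:                 # threshold crossing: spike and reset
            v = v_reset
            spikes[i, t] = True

# spike-count correlation in 50 ms windows
w = int(50e-3 / dt)
counts = spikes[:, : n // w * w].reshape(2, -1, w).sum(axis=2)
print("input correlation:", c_in)
print("output count correlation:", np.corrcoef(counts[0], counts[1])[0, 1])
```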
In the second part, we will consider correlations in recurrent random networks. Using a binary neuron model [Ginzburg & Sompolinsky 1994], we explain how mean-field theory determines the stationary state and how network-generated noise linearizes the single neuron response. The resulting linear equation for the fluctuations in recurrent networks is then solved to obtain the correlation structure in balanced random networks. We discuss two different points of view of the recently reported active suppression of correlations in balanced networks by fast tracking [Renart 2010] and by negative feedback [Tetzlaff 2010]. Finally, we consider extensions of the theory of correlations of linear Poisson spiking models [Hawkes 1971] to the leaky integrate-and-fire model and present a unifying view of linearized theories of correlations [Helias 2011].
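The unifying linear theory mentioned above rests on the classic Hawkes (1971) result for networks of linearly interacting point processes: in the Fourier domain, with interaction-kernel matrix G(ω) and stationary rates r_i, the cross-spectral matrix of the spike trains is (up to normalization conventions)

```latex
C(\omega) = \left[\mathbf{1} - G(\omega)\right]^{-1} D
            \left[\mathbf{1} - G^{\dagger}(\omega)\right]^{-1},
\qquad D = \operatorname{diag}(r_1, \ldots, r_N)
```

Linearized theories for leaky integrate-and-fire neurons [Helias 2011] take the same form, with G(ω) replaced by the neuron's effective linear response to input perturbations.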
Lastly, we will revisit the important question of how correlations affect information, and vice versa [Zohary et al, 1994], in neuronal circuits, presenting novel results on the information content of recurrent networks of integrate-and-fire neurons [Moreno-Bote and Pouget, Cosyne abstracts, 2011].
- Ginzburg & Sompolinsky (1994), Theory of correlations in stochastic neural networks, PRE 50:3171-3190
- Renart et al. (2010), The Asynchronous State in Cortical Circuits, Science 327(5965):587-590
- Tetzlaff et al. (2010), Decorrelation of low-frequency neural activity by inhibitory feedback, BMC Neuroscience 11(Suppl 1):O11
- Hawkes (1971), Point Spectra of Some Mutually Exciting Point Processes, Journal of the Royal Statistical Society Series B 33(3):438-443
- Helias et al. (2011), Towards a unified theory of correlations in recurrent neural networks, BMC Neuroscience 12(Suppl 1):P73
- Shadlen & Newsome (1998), The variable discharge of cortical neurons: implications for connectivity, computation, and information coding, J Neurosci 18:3870-96
- Moreno-Bote & Parga (2006), Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses, PRL 96:028101
- de la Rocha et al. (2007), Correlation between neural spike trains increases with firing rate, Nature 448:802-6
- Zohary et al. (1994), Correlated Neuronal Discharge Rate and Its Implications for Psychophysical Performance, Nature 370:140-143
T3: Parameter Search for Neural Spiking Activity: Creation and Analysis of Simulation Databases
Cengiz Gunay (Emory University), Anca Doloc-Mihu (Emory University), Vladislav Sekulic (University of Toronto)
Parameter tuning of model neurons to achieve biologically realistic spiking patterns is a non-trivial task, for which several methods have been proposed. One method is to perform a systematic search through a very large parameter space (with thousands to millions of model instances), and then categorize spiking neural activity characteristics in a database [1-8]. This technique is of key importance because of the existence of multiple parameter sets that give similar dynamics, both experimentally and in silico -- i.e. there is no single "correct" model. In this tutorial, we will teach some of the implementations of this method (e.g., the PANDORA Matlab Toolbox [6,11]) used in recent projects for tuning models of rat globus pallidus neurons [5,9], lobster pyloric network calcium sensors [7,10], leech heart interneurons [8] and hippocampal O-LM interneurons (Skinner Lab, TWRI/UHN and Univ. Toronto).
The tutorial will be composed of three parts that will include the following topics:
1. Running simulations for systematic parameter search
- Model complexity versus simulation time trade-off (single compartment versus full morphology; how many channels to include?)
- Working with Hodgkin-Huxley type ion channels and morphological reconstructions (e.g., determining dendritic distributions of Ih channels in hippocampal O-LM interneurons)
- Determining ranges for channel, synapse, and morphology parameters
- Setting up simulations and storage to accommodate a large number of output files
- Examples using GENESIS, NEURON and custom C/C++ simulators
- Control of simulations on high-performance clusters
- Troubleshooting common pitfalls
2. Extracting activity characteristics and constructing databases
- Measuring spike shape, firing rate and bursting properties
- Analyzing large numbers of simulation output files
- Standardizing feature extraction and error handling
- Examples using Matlab, Java and shell scripting languages
3. Analysis of information in databases
- Calculating histograms, correlations, etc.
- Ranking simulations based on similarity to recordings
- Multivariate parameter analysis
- Data mining methods
- Visualization (e.g., dimensional stacking)
- Higher order methods (e.g., factor and principal component analyses)
Each of these parts will have time allocated for Q&A and interaction with the audience. If participants bring a laptop pre-loaded with Matlab, they can follow some of our examples.
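The tutorial's own examples use GENESIS, NEURON, and the PANDORA Matlab toolbox; purely as an illustration of the search-extract-store-rank pipeline outlined in the three parts above, here is a minimal self-contained Python sketch in which the "simulation", the extracted features, and the database schema are all toy assumptions:

```python
import itertools
import sqlite3
import numpy as np

def simulate(g_na, g_k):
    """Toy stand-in for a conductance-based simulation: returns a fake
    voltage trace whose spiking depends on the two 'conductances'."""
    t = np.arange(0.0, 1.0, 1e-4)                  # 1 s at 0.1 ms steps
    v = -65.0 + 40.0 * np.sin(2 * np.pi * (g_na / g_k) * t)
    return t, v

def extract_features(t, v, thresh=-30.0):
    """Measure simple activity characteristics from a voltage trace."""
    up = np.flatnonzero((v[:-1] < thresh) & (v[1:] >= thresh))
    return {"n_spikes": int(len(up)), "rate": len(up) / (t[-1] - t[0])}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runs (g_na REAL, g_k REAL, n_spikes INTEGER, rate REAL)")

# systematic grid search over the two-parameter space
for g_na, g_k in itertools.product(np.linspace(50, 150, 5), np.linspace(5, 20, 5)):
    feats = extract_features(*simulate(g_na, g_k))
    db.execute("INSERT INTO runs VALUES (?, ?, ?, ?)",
               (float(g_na), float(g_k), feats["n_spikes"], feats["rate"]))

# rank model instances by distance to a target firing rate of 10 Hz
for row in db.execute("SELECT g_na, g_k, rate FROM runs ORDER BY ABS(rate - 10) LIMIT 3"):
    print(row)
```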
References:
[1] Prinz AA, Billimoria CP, and Marder E (2003). Alternative to hand-tuning conductance-based models: Construction and analysis of databases of model neurons. J Neurophysiol, 90:3998-4015.
[2] Prinz AA, Bucher D, and Marder E (2004). Similar network activity from disparate circuit parameters. Nat Neurosci, 7(12):1345-1352.
[3] Calin-Jageman RJ, Tunstall MJ, Mensh BD, Katz PS, Frost WN (2007). Parameter space analysis suggests multi-site plasticity contributes to motor pattern initiation in Tritonia. J Neurophysiol, 98:2382-2398.
[4] Lytton WW, Omurtag A (2007). Tonic-clonic transitions in computer simulation. J Clin Neurophysiol, 24(2):175-81.
[5] Günay C, Edgerton JR, and Jaeger D (2008). Channel density distributions explain spiking variability in the globus pallidus: A combined physiology and computer simulation database approach. J Neurosci, 28(30):7476-91.
[6] Günay C, Edgerton JR, Li S, Sangrey T, Prinz AA, and Jaeger D (2009). Database analysis of simulated and recorded electrophysiological datasets with PANDORA's Toolbox. Neuroinformatics, 7(2):93-111.
[7] Günay C, and Prinz AA (2010). Model calcium sensors for network homeostasis: Sensor and readout parameter analysis from a database of model neuronal networks. J Neurosci, 30:1686-1698.
[8] Doloc-Mihu A, and Calabrese RL (2011). A database of computational models of a half-center oscillator for analyzing how neuronal parameters influence network activity. J Biol Phys, 37(3):263-283.
Model and Software Links:
[9] Rat globus pallidus neuron model: https://senselab.med.yale.edu/modeldb/ShowModel.asp?model=114639
[10] Lobster stomatogastric ganglion pyloric network model: http://senselab.med.yale.edu/ModelDB/showmodel.asp?model=144387
[11] PANDORA Matlab Toolbox: http://software.incf.org/software/pandora
T4: Complex networks and graph theoretical concepts
Duane Nykamp (University of Minnesota)
Increasing evidence suggests structure in the brain's networks that is not well described by standard random graph models. Such findings have opened a debate over whether the brain's networks are "small world" or "scale-free," whether they contain central well-connected "hubs," and whether they are highly "clustered" or "modular." But how does one interpret the significance of this supposedly "non-random" structure? Can we determine how such network features influence the dynamics of neuronal networks? In this tutorial, we will introduce basic graph theoretical concepts and their application to complex networks. We will examine experimental findings about network structure in the brain and discuss the potential of the graph theoretical framework for shedding light on the function of neural circuits.
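As a taste of these concepts, here is a short sketch using the networkx Python library (assumed available) to contrast a random graph with a Watts-Strogatz small-world graph of matched size and density; small-world networks combine the short path lengths of random graphs with much higher clustering:

```python
import networkx as nx

n, k, p = 1000, 10, 0.1  # nodes, mean degree, rewiring probability
er = nx.gnm_random_graph(n, n * k // 2, seed=1)  # Erdos-Renyi random graph
ws = nx.watts_strogatz_graph(n, k, p, seed=1)    # small-world graph

for name, g in [("random", er), ("small-world", ws)]:
    if not nx.is_connected(g):  # restrict to largest component if needed
        g = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "mean path length:", round(nx.average_shortest_path_length(g), 2))
```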
T5: Workflows for reproducible research in computational neuroscience
Andrew P. Davison (UNIC, CNRS, Gif sur Yvette)
Reliably repeating previous experiments, one of the cornerstones of the scientific method, ought to be easy in computational neuroscience, given that computers are deterministic, not suffering from the problems of inter-subject and trial-to-trial variability that make reproduction of biological experiments so challenging. In general, however, it is not at all easy, especially when running someone else's code, or when months or years have elapsed since the original experiment.
The failure to routinely achieve replicability in computational neuroscience (and probably in computational science in general; see Donoho et al., 2009 [1]) has important implications both for the credibility of the field and for its rate of progress (since reuse of existing code is fundamental to good software engineering). For individual researchers, as the example of ModelDB has shown, sharing reliable code enhances reputation and leads to increased impact.
In this tutorial we will identify the reasons for the difficulties often encountered in reproducing computational experiments, and present some best practices for making our work more reliable and more easily reproducible, by ourselves and by others, without adding a huge burden to either our day-to-day research or the publication process.
We will then cover a number of tools that can facilitate a reproducible workflow and allow tracking the provenance of results from a published article back through intermediate analysis stages to the original models and simulations. The tools that will be covered include Git [2], Mercurial [3], Sumatra [4] and VisTrails [5].
[1] Donoho et al. (2009) 15 Years of Reproducible Research in Computational Harmonic Analysis, Computing in Science and Engineering 11:8-18. doi:10.1109/MCSE.2009.15
[2] http://git-scm.com/
[3] http://mercurial.selenic.com/
[4] http://neuralensemble.org/sumatra
[5] http://www.vistrails.org/
T6: The finer points of modeling (with NEURON)
Ted Carnevale (Yale University School of Medicine, New Haven) William W. Lytton (SUNY Downstate Medical Center, NY)
This tutorial will focus on practical aspects of constructing and using models of cells and networks that will help modelers improve their productivity and the quality of their models. We will cover topics that include efficient strategies for specifying model properties, tactics and tools for debugging, and what we judge to be important, if sometimes overlooked, aspects of hoc, Python, and NMODL. This is not an "introductory" course--attendees are assumed to be familiar with using hoc or Python to develop NEURON models of cells or networks. Applicants with a strong interest in specific questions or topics are encouraged to email suggestions to ted dot carnevale at yale dot edu before June 16, 2012.
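For orientation, the kind of sanity checking the tutorial has in mind can be illustrated with a small sketch using NEURON's Python interface; the section names and parameter values here are arbitrary toy choices:

```python
from neuron import h

# build a toy two-section cell
soma = h.Section(name='soma')
dend = h.Section(name='dend')
dend.connect(soma(1))
soma.L = soma.diam = 20            # um
dend.L, dend.diam, dend.nseg = 200, 2, 11
soma.insert('hh')                  # Hodgkin-Huxley channels in the soma
dend.insert('pas')                 # passive dendrite

# debugging aids: print the branching structure, then every section's
# geometry and inserted mechanisms, to verify the model really is what
# you intended to specify
h.topology()
for sec in h.allsec():
    h.psection(sec=sec)
```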
T7: Real-time simulation of large-scale neural models using the NeoCortical Simulator (NCS)
Laurence C. Jayet Bray, Roger V. Hoang, Frederick C. Harris, Jr. (Brain Computation Laboratory, Dept. of Computer Science & Engineering, University of Nevada, Reno)
This tutorial will concentrate mostly on how to design large-scale models using the NeoCortical Simulator (NCS) and on how to run simulations in real time.
This is an introductory course for attendees who wish to learn a new simulation program; it emphasizes the construction, simulation, and analysis of current brain models. Additional information will be given on distribution capabilities, levels of abstraction, software and hardware platforms, possible real-time virtual robotic applications, and how NCS differs from other simulation programs.
Recent software optimizations and hardware improvements have increased simulation speed and improved the robustness of complex brain models.
NCS requires no computer programming experience.
Applicants with further questions are welcome to contact Laurence at ljayet at cse dot unr dot edu.