Catalogue data in Autumn Semester 2022

Neural Systems and Computation Master Information
Core Courses
Compulsory Core Courses
Number | Title | Type | ECTS | Hours | Lecturers
227-1039-00L Basics of Instrumentation, Measurement, and Analysis (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: INI502

Mind the enrolment deadlines at UZH:
Link

Registration in this class requires the permission of the instructors. Class size will be limited to available lab spots.
Preference is given to students that require this class as part of their major.
O | 4 credits | 9S | S.‑C. Liu, T. Delbrück, R. Hahnloser, G. Indiveri, V. Mante, P. Pyk, D. Scaramuzza, W. von der Behrens
Abstract: Experimental data are only as good as the instrumentation and measurement behind them, never any better. This course provides the very basics of instrumentation relevant to neurophysiology and neuromorphic engineering. It consists of two parts: a common introductory part involving analog signals and their acquisition (Part I), and a more specialized second part (Part II).
Objective: The goal of Part I is to provide a general introduction to the signal acquisition process. Students are familiarized with basic lab equipment such as oscilloscopes, function generators, and data acquisition devices. Different electrical signals are generated, visualized, filtered, digitized, and analyzed using Matlab (Mathworks Inc.) or Labview (National Instruments).

In Part II, the students are divided into small groups to work on individual measurement projects according to availability and interest. Students single-handedly solve a measurement task, making use of their basic knowledge acquired in the first part. Various signal sources will be provided.
Prerequisites / Notice: For each part, students must hand in a written report and present a live demonstration of their measurement setup to the respective supervisor. The supervisor of Part I is the teaching assistant, and the supervisor of Part II is task specific. Admission to Part II is conditional on completion of Part I (report + live demonstration).

Reports must contain detailed descriptions of the measurement goal, the measurement procedure, and the measurement outcome. Either confidence or significance of measurements must be provided. Acquisition and analysis software must be documented.
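As a rough illustration of the Part I workflow and the report requirement above, the sketch below fakes an "acquired" signal, filters it, and states a confidence interval. It is not course material: the course itself uses Matlab or Labview with real lab hardware, and all numbers here (sampling rate, noise level, filter width) are assumptions.

```python
import math
import random

random.seed(0)
fs = 1000                                    # assumed sampling rate, Hz
n = 1000                                     # one second of samples
t = [i / fs for i in range(n)]
clean = [0.5 * math.sin(2 * math.pi * 10 * ti) for ti in t]   # 10 Hz test tone
acquired = [s + random.gauss(0.0, 0.1) for s in clean]        # additive sensor noise

# Moving-average filter as a software stand-in for the lab's hardware filters.
win = 21
half = win // 2
filtered = []
for i in range(n):
    lo, hi = max(0, i - half), min(n, i + half + 1)
    filtered.append(sum(acquired[lo:hi]) / (hi - lo))

# 95% confidence interval on the mean residual (normal approximation,
# 1.96 standard errors), the kind of statement a report must contain.
residual = [a - f for a, f in zip(acquired, filtered)]
mean = sum(residual) / n
var = sum((r - mean) ** 2 for r in residual) / (n - 1)
sem = math.sqrt(var / n)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(f"residual mean {mean:.4f}, 95% CI [{ci[0]:.4f}, {ci[1]:.4f}]")
```

The same generate–acquire–filter–analyze loop carries over directly to Matlab or Labview.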
227-1031-00L Journal Club (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: INI702

Mind the enrolment deadlines at UZH:
Link
O | 2 credits | 1S | G. Indiveri
Abstract: The Neuroinformatics Journal Club is a weekly meeting during which students present current research papers. Each presentation lasts 30 to 60 minutes and is followed by a general discussion.
Objective: The Neuroinformatics Journal Club aims to train students to present cutting-edge research clearly and efficiently. It leads students to learn about current topics in neuroscience and neuroinformatics, to search the relevant literature, and to appraise published papers in a critical and scholarly manner. The students learn to present complex concepts and answer critical questions.
Content: Relevant current papers in neuroscience and neuroinformatics are covered.
227-1043-00L Neuroinformatics - Colloquia (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: INI701

Mind the enrolment deadlines at UZH:
Link
Z | 0 credits | 1K | S.‑C. Liu, R. Hahnloser, V. Mante
Abstract: The colloquium in Neuroinformatics is a series of lectures given by invited experts. The lecture topics reflect the current themes in neurobiology and neuromorphic engineering that are relevant for our Institute.
Objective: The goal of these talks is to provide insight into recent research results. The talks are not meant for the general public, but rather aimed at specialists in the field.
Content: The topics depend heavily on the invited speakers and thus change from week to week. All topics concern neural computation and its implementation in biological or artificial systems.
227-1045-00L Readings in Neuroinformatics (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: INI431

Mind the enrolment deadlines at UZH:
Link
O | 3 credits | 1S | W. von der Behrens, R. Hahnloser, S.‑C. Liu, V. Mante
Abstract: Thirteen major areas of research have been selected, covering the key concepts that have led to our current ideas of how the nervous system is built and functions. We will read original papers, explore the conceptual links between them, and discuss the 'sociology' of science: the pursuit of basic science questions over a century of research.
Objective: It is commonplace that scientists rarely cite literature that is older than 10 years, and when they do, they usually cite one paper that serves as the representative for a larger body of work that has long since been incorporated anonymously in textbooks. Even worse, many authors have not even read the papers they cite in their own publications. This course, 'Foundations of Neuroscience', is one antidote. Thirteen major areas of research have been selected. They cover the key concepts that have led to our current ideas of how the nervous system is built and functions. Unusually, we will explore these areas of research by reading the original publications, instead of reading a digested summary from a textbook or review. By doing this, we will learn how the discoveries were made, what instrumentation was used, how the scientists interpreted their own findings, and how their work, often spanning many decades and linked together with related findings from many different scientists, generates the current views of the mechanism and structure of the nervous system. We will read different original papers, explore the conceptual links between them, and discuss the 'sociology' of science. We will also explore the personalities of the scientists and the context in which they made their seminal discoveries. Each week, course members will be given original papers to read for homework, and they will write a short abstract for each paper. We will then meet weekly with the course leader and an assistant for an interactive seminar of about an hour. An intimate knowledge of the papers will be assumed, so that the discussion does not center simply on an explication of the contents of the papers. Assessment will be in the form of a written exam where students will be given a paper and asked to write a short abstract of its contents.
Content: See Objective above.
Elective Core Courses
Systems Neurosciences
Number | Title | Type | ECTS | Hours | Lecturers
227-0421-00L Deep Learning in Artificial and Biological Neuronal Networks | W | 4 credits | 3G | B. Grewe
Abstract: Deep Learning (DL), a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. However, DL is far from being understood, and investigating learning in biological networks might again serve as a compelling inspiration to think differently about state-of-the-art ANN training methods.
Objective: The main goal of this lecture is to provide a comprehensive overview of the learning principles of neuronal networks, as well as to introduce the diverse skill set (e.g. simulating a spiking neuronal network) that is required to understand learning in large, hierarchical neuronal networks. To achieve this, the lectures and exercises merge ideas, concepts, and methods from machine learning and neuroscience. These include training basic ANNs, simulating spiking neuronal networks, and reading and understanding the main ideas presented in today's neuroscience papers.
After this course students will be able to:
- read and understand the main ideas and methods that are presented in today’s neuroscience papers
- explain the basic ideas and concepts of plasticity in the mammalian brain
- implement alternative ANN learning algorithms to ‘error backpropagation’ in order to train deep neuronal networks.
- use a diverse set of ANN regularization methods to improve learning
- simulate spiking neuronal networks that learn simple (e.g. digit classification) tasks in a supervised manner.
Content: Deep learning, a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. The origins of deep hierarchical learning can be traced back to early neuroscience research by Hubel and Wiesel in the 1960s, who first described the neuronal processing of visual inputs in the mammalian neocortex. Similar to their neocortical counterparts, ANNs seem to learn by interpreting and structuring the data provided by the external world. However, while deep ANNs outperform humans on specific tasks such as playing (video) games (Mnih et al., 2015; Silver et al., 2018), they still do not perform on par when it comes to recognizing actions in movie data, and their ability to act as generalizable problem solvers remains far behind what the human brain seems to achieve effortlessly. Moreover, biological neuronal networks can learn far more effectively with fewer training examples: they achieve much higher performance in recognizing complex patterns in time-series data (e.g. recognizing actions in movies), they dynamically adapt to new tasks without losing performance, and they achieve unmatched performance in detecting and integrating out-of-domain data examples (data they have not been trained with). In other words, many of the big challenges and unknowns that have emerged in the field of deep learning over the last years are already mastered exceptionally well by the biological neuronal networks in our brain. On the other hand, many facets of typical ANN designs and training algorithms seem biologically implausible, such as non-local weight updates, discrete processing of time, and scalar communication between neurons. Recent evidence suggests that learning in biological systems is the result of a complex interplay of diverse error-feedback signaling processes acting at multiple scales, ranging from single synapses to entire networks.
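To give a concrete flavor of the spiking simulations the exercises build toward, here is a minimal, self-contained sketch (not course material) of a single leaky integrate-and-fire (LIF) neuron, the usual starting point before simulating whole spiking networks. All parameter values are illustrative choices, not course-specified.

```python
def simulate_lif(i_input, t_sim=0.5, dt=1e-4,
                 tau=0.02, r_m=1e7, v_rest=-0.07, v_thresh=-0.05):
    """Euler-integrate dV/dt = (v_rest - V + R*I) / tau; return spike times (s)."""
    v = v_rest
    spikes = []
    for step in range(int(t_sim / dt)):
        v += dt * (v_rest - v + r_m * i_input) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_rest             # reset the membrane after the spike
    return spikes

spikes = simulate_lif(i_input=3e-9)   # constant 3 nA drive
print(f"{len(spikes)} spikes in 0.5 s")
```

A spiking network simulation chains many such units together with synaptic weights; here the point is just the membrane dynamics and the threshold-and-reset mechanism.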
Lecture notes: The lecture slides will be provided as a PDF after each lecture.
Prerequisites / Notice: This advanced-level lecture requires some basic background in machine/deep learning. Students are thus expected to have a basic mathematical foundation, including linear algebra, multivariate calculus, and probability. The course is not meant as an extended tutorial on how to train deep networks in PyTorch or TensorFlow, although these tools are used.
Participation in the course is subject to the following conditions:

1) The number of participants is limited to 120 students (MSc and PhDs).

2) Students must have taken the exam in Deep Learning (263-3210-00L) or have acquired equivalent knowledge.
227-1037-00L Introduction to Neuroinformatics | W | 6 credits | 2V + 1U + 1A | V. Mante, M. Cook, B. Grewe, G. Indiveri, D. Kiper, W. von der Behrens
Abstract: The course provides an introduction to the functional properties of neurons, in particular the description of membrane electrical properties (action potentials, channels), neuronal anatomy, synaptic structures, and neuronal networks. Simple models of computation, learning, and behavior are explained, and some artificial systems (robot, chip) are presented.
Objective: Understanding computation by neurons and neuronal circuits is one of the great challenges of science. Many different disciplines can contribute their tools and concepts to solving mysteries of neural computation. The goal of this introductory course is to introduce the monocultures of physics, maths, computer science, engineering, biology, psychology, and even philosophy and history, to discover the enchantments and challenges that we all face in taking on this major 21st century problem and how each discipline can contribute to discovering solutions.
Content: This course considers the structure and function of biological neural networks at different levels. The function of neural networks lies fundamentally in their wiring and in the electro-chemical properties of nerve cell membranes. Thus, the biological structure of the nerve cell needs to be understood if biologically-realistic models are to be constructed. These simpler models are used to estimate the electrical current flow through dendritic cables and explore how a more complex geometry of neurons influences this current flow. The active properties of nerves are studied to understand both sensory transduction and the generation and transmission of nerve impulses along axons. The concept of local neuronal circuits arises in the context of the rules governing the formation of nerve connections and topographic projections within the nervous system. Communication between neurons in the network can be thought of as information flow across synapses, which can be modified by experience. We need an understanding of the action of inhibitory and excitatory neurotransmitters and neuromodulators, so that the dynamics and logic of synapses can be interpreted. Finally, simple neural architectures of feedforward and recurrent networks are discussed in the context of co-ordination, control, and integration of sensory and motor information.

Connections to computer science and artificial intelligence are discussed, but the main focus of the course is on establishing the biological basis of computations in neurons.
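The "current flow through dendritic cables" mentioned above can be illustrated with the classic passive-cable result: a steady voltage decays exponentially with distance as V(x) = V0 * exp(-x / lambda), with length constant lambda = sqrt(rm / ri). The constants below are made-up but plausible values for the sketch, not lecture numbers.

```python
import math

rm = 4.0e4    # membrane resistance of a unit length of cable, ohm * cm (assumed)
ri = 1.0e8    # axial (intracellular) resistance per unit length, ohm / cm (assumed)
lam = math.sqrt(rm / ri)           # length constant, cm
v0 = 10.0                          # depolarization at the injection site, mV

def v_at(x_cm):
    """Steady-state voltage (mV) at distance x (cm) from the injection site."""
    return v0 * math.exp(-x_cm / lam)

print(f"length constant = {lam * 1e4:.0f} um")
print(f"attenuation at one length constant: {v_at(lam) / v0:.3f}")
```

One length constant away, the signal has dropped to 1/e (~37%) of its original amplitude, which is why dendritic geometry matters so much for how synaptic inputs reach the soma.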
227-1051-00L Systems Neuroscience (University of Zurich)
No enrolment to this course at ETH Zurich. Book the corresponding module directly at UZH as an incoming student.
UZH Module Code: INI415

Mind the enrolment deadlines at UZH:
Link
W | 6 credits | 2V + 1U | D. Kiper
Abstract: This course focuses on basic aspects of central nervous system physiology, including perception, motor control, and cognitive functions.
Objective: To understand the basic concepts underlying perceptual, motor, and cognitive functions.
Content: Main emphasis on sensory systems, complemented by motor and cognitive functions.
Lecture notes: None
Literature: "The Senses", ed. H. Barlow and J. Mollon, Cambridge.
"Principles of Neural Science", Kandel, Schwartz, and Jessell.
Prerequisites / Notice: None
Neural Computation and Theoretical Neurosciences
Number | Title | Type | ECTS | Hours | Lecturers
227-1037-00L Introduction to Neuroinformatics | W | 6 credits | 2V + 1U + 1A | V. Mante, M. Cook, B. Grewe, G. Indiveri, D. Kiper, W. von der Behrens
Abstract: The course provides an introduction to the functional properties of neurons, in particular the description of membrane electrical properties (action potentials, channels), neuronal anatomy, synaptic structures, and neuronal networks. Simple models of computation, learning, and behavior are explained, and some artificial systems (robot, chip) are presented.
Objective: Understanding computation by neurons and neuronal circuits is one of the great challenges of science. Many different disciplines can contribute their tools and concepts to solving mysteries of neural computation. The goal of this introductory course is to introduce the monocultures of physics, maths, computer science, engineering, biology, psychology, and even philosophy and history, to discover the enchantments and challenges that we all face in taking on this major 21st century problem and how each discipline can contribute to discovering solutions.
Content: This course considers the structure and function of biological neural networks at different levels. The function of neural networks lies fundamentally in their wiring and in the electro-chemical properties of nerve cell membranes. Thus, the biological structure of the nerve cell needs to be understood if biologically-realistic models are to be constructed. These simpler models are used to estimate the electrical current flow through dendritic cables and explore how a more complex geometry of neurons influences this current flow. The active properties of nerves are studied to understand both sensory transduction and the generation and transmission of nerve impulses along axons. The concept of local neuronal circuits arises in the context of the rules governing the formation of nerve connections and topographic projections within the nervous system. Communication between neurons in the network can be thought of as information flow across synapses, which can be modified by experience. We need an understanding of the action of inhibitory and excitatory neurotransmitters and neuromodulators, so that the dynamics and logic of synapses can be interpreted. Finally, simple neural architectures of feedforward and recurrent networks are discussed in the context of co-ordination, control, and integration of sensory and motor information.

Connections to computer science and artificial intelligence are discussed, but the main focus of the course is on establishing the biological basis of computations in neurons.
227-0421-00L Deep Learning in Artificial and Biological Neuronal Networks | W | 4 credits | 3G | B. Grewe
Abstract: Deep Learning (DL), a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. However, DL is far from being understood, and investigating learning in biological networks might again serve as a compelling inspiration to think differently about state-of-the-art ANN training methods.
Objective: The main goal of this lecture is to provide a comprehensive overview of the learning principles of neuronal networks, as well as to introduce the diverse skill set (e.g. simulating a spiking neuronal network) that is required to understand learning in large, hierarchical neuronal networks. To achieve this, the lectures and exercises merge ideas, concepts, and methods from machine learning and neuroscience. These include training basic ANNs, simulating spiking neuronal networks, and reading and understanding the main ideas presented in today's neuroscience papers.
After this course students will be able to:
- read and understand the main ideas and methods that are presented in today’s neuroscience papers
- explain the basic ideas and concepts of plasticity in the mammalian brain
- implement alternative ANN learning algorithms to ‘error backpropagation’ in order to train deep neuronal networks.
- use a diverse set of ANN regularization methods to improve learning
- simulate spiking neuronal networks that learn simple (e.g. digit classification) tasks in a supervised manner.
Content: Deep learning, a brain-inspired weak form of AI, allows training of large artificial neuronal networks (ANNs) that, like humans, can learn real-world tasks such as recognizing objects in images. The origins of deep hierarchical learning can be traced back to early neuroscience research by Hubel and Wiesel in the 1960s, who first described the neuronal processing of visual inputs in the mammalian neocortex. Similar to their neocortical counterparts, ANNs seem to learn by interpreting and structuring the data provided by the external world. However, while deep ANNs outperform humans on specific tasks such as playing (video) games (Mnih et al., 2015; Silver et al., 2018), they still do not perform on par when it comes to recognizing actions in movie data, and their ability to act as generalizable problem solvers remains far behind what the human brain seems to achieve effortlessly. Moreover, biological neuronal networks can learn far more effectively with fewer training examples: they achieve much higher performance in recognizing complex patterns in time-series data (e.g. recognizing actions in movies), they dynamically adapt to new tasks without losing performance, and they achieve unmatched performance in detecting and integrating out-of-domain data examples (data they have not been trained with). In other words, many of the big challenges and unknowns that have emerged in the field of deep learning over the last years are already mastered exceptionally well by the biological neuronal networks in our brain. On the other hand, many facets of typical ANN designs and training algorithms seem biologically implausible, such as non-local weight updates, discrete processing of time, and scalar communication between neurons. Recent evidence suggests that learning in biological systems is the result of a complex interplay of diverse error-feedback signaling processes acting at multiple scales, ranging from single synapses to entire networks.
Lecture notes: The lecture slides will be provided as a PDF after each lecture.
Prerequisites / Notice: This advanced-level lecture requires some basic background in machine/deep learning. Students are thus expected to have a basic mathematical foundation, including linear algebra, multivariate calculus, and probability. The course is not meant as an extended tutorial on how to train deep networks in PyTorch or TensorFlow, although these tools are used.
Participation in the course is subject to the following conditions:

1) The number of participants is limited to 120 students (MSc and PhDs).

2) Students must have taken the exam in Deep Learning (263-3210-00L) or have acquired equivalent knowledge.
Neurotechnologies and Neuromorphic Engineering
Number | Title | Type | ECTS | Hours | Lecturers
227-1037-00L Introduction to Neuroinformatics | W | 6 credits | 2V + 1U + 1A | V. Mante, M. Cook, B. Grewe, G. Indiveri, D. Kiper, W. von der Behrens
Abstract: The course provides an introduction to the functional properties of neurons, in particular the description of membrane electrical properties (action potentials, channels), neuronal anatomy, synaptic structures, and neuronal networks. Simple models of computation, learning, and behavior are explained, and some artificial systems (robot, chip) are presented.
Objective: Understanding computation by neurons and neuronal circuits is one of the great challenges of science. Many different disciplines can contribute their tools and concepts to solving mysteries of neural computation. The goal of this introductory course is to introduce the monocultures of physics, maths, computer science, engineering, biology, psychology, and even philosophy and history, to discover the enchantments and challenges that we all face in taking on this major 21st century problem and how each discipline can contribute to discovering solutions.
Content: This course considers the structure and function of biological neural networks at different levels. The function of neural networks lies fundamentally in their wiring and in the electro-chemical properties of nerve cell membranes. Thus, the biological structure of the nerve cell needs to be understood if biologically-realistic models are to be constructed. These simpler models are used to estimate the electrical current flow through dendritic cables and explore how a more complex geometry of neurons influences this current flow. The active properties of nerves are studied to understand both sensory transduction and the generation and transmission of nerve impulses along axons. The concept of local neuronal circuits arises in the context of the rules governing the formation of nerve connections and topographic projections within the nervous system. Communication between neurons in the network can be thought of as information flow across synapses, which can be modified by experience. We need an understanding of the action of inhibitory and excitatory neurotransmitters and neuromodulators, so that the dynamics and logic of synapses can be interpreted. Finally, simple neural architectures of feedforward and recurrent networks are discussed in the context of co-ordination, control, and integration of sensory and motor information.

Connections to computer science and artificial intelligence are discussed, but the main focus of the course is on establishing the biological basis of computations in neurons.
227-1033-00L Neuromorphic Engineering I (Restricted registration)
Registration in this class requires the permission of the instructors. Class size will be limited to available lab spots.
Preference is given to students that require this class as part of their major.

Information for UZH students:
Enrolment to this course unit only possible at ETH. No enrolment to module INI404 at UZH.
Please mind the ETH enrolment deadlines for UZH students: Link
W | 6 credits | 2V + 3U | T. Delbrück, G. Indiveri, S.‑C. Liu
Abstract: This course covers analog circuits with emphasis on neuromorphic engineering: MOS transistors in CMOS technology, static circuits, dynamic circuits, systems (silicon neuron, silicon retina, silicon cochlea) with an introduction to multi-chip systems. The lectures are accompanied by weekly laboratory sessions.
Objective: Understanding of the characteristics of neuromorphic circuit elements.
Content: Neuromorphic circuits are inspired by the organizing principles of biological neural circuits. Their computational primitives are based on physics of semiconductor devices. Neuromorphic architectures often rely on collective computation in parallel networks. Adaptation, learning and memory are implemented locally within the individual computational elements. Transistors are often operated in weak inversion (below threshold), where they exhibit exponential I-V characteristics and low currents. These properties lead to the feasibility of high-density, low-power implementations of functions that are computationally intensive in other paradigms. Application domains of neuromorphic circuits include silicon retinas and cochleas for machine vision and audition, real-time emulations of networks of biological neurons, and the development of autonomous robotic systems. This course covers devices in CMOS technology (MOS transistor below and above threshold, floating-gate MOS transistor, phototransducers), static circuits (differential pair, current mirror, transconductance amplifiers, etc.), dynamic circuits (linear and nonlinear filters, adaptive circuits), systems (silicon neuron, silicon retina and cochlea) and an introduction to multi-chip systems that communicate events analogous to spikes. The lectures are accompanied by weekly laboratory sessions on the characterization of neuromorphic circuits, from elementary devices to systems.
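The exponential weak-inversion behavior mentioned above can be sketched numerically. This is a rough illustrative model, not a design equation from the course: the saturated subthreshold drain current follows I = I0 * exp(kappa * Vg / Ut), and the values of I0 and kappa below are assumed typical numbers.

```python
import math

UT = 0.025       # thermal voltage kT/q at room temperature, ~25 mV
I0 = 1e-15       # leakage-scale pre-factor in amps (assumed)
KAPPA = 0.7      # subthreshold slope factor (assumed typical value)

def subthreshold_current(vg):
    """Saturated weak-inversion drain current (A) for gate voltage vg (V)."""
    return I0 * math.exp(KAPPA * vg / UT)

# Gate swing needed to change the current by one decade: (UT / kappa) * ln(10),
# about 82 mV with these numbers.
decade_step = UT / KAPPA * math.log(10)
print(f"gate swing per decade: {decade_step * 1e3:.1f} mV")
print(f"I at Vg = 0.3 V: {subthreshold_current(0.3):.2e} A")
```

This steep exponential is exactly what makes the low-current, high-density circuits of the course feasible, and also why subthreshold designs are sensitive to device mismatch.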
Literature: S.-C. Liu et al.: Analog VLSI: Circuits and Principles; various publications.
Prerequisites / Notice: The course is highly recommended for those who intend to take the spring-semester course 'Neuromorphic Engineering II', which teaches the conception, simulation, and physical layout of such circuits with chip design tools.

Prerequisites: A background in basic semiconductor physics is helpful, but not required.
227-0393-10L Bioelectronics and Biosensors | W | 6 credits | 2V + 2U | J. Vörös, M. F. Yanik
Abstract: The course introduces bioelectricity and the sensing concepts that enable obtaining information about neurons and their networks. The sources of electrical fields and currents in the context of biological systems are discussed, along with the fundamental concepts and challenges of measuring bioelectronic signals and the basic concepts of recording from optogenetically modified organisms.
Objective: During this course the students will:
- learn the basic concepts in bioelectronics including the sources of bioelectronic signals and the methods to measure them
- be able to solve typical problems in bioelectronics
- learn about the remaining challenges in this field
Content: Lecture topics:

1. Introduction

Sources of bioelectronic signals
2. Membrane and Transport
3-4. Action potential and Hodgkin-Huxley

Measuring bioelectronic signals
5. Detection and Noise
6. Measuring currents in solutions, nanopore sensing and patch clamp pipettes
7. Measuring potentials in solution and core conductance model
8. Measuring electronic signals with wearable electronics, ECG, EEG
9. Measuring mechanical signals with bioelectronics

In vivo stimulation and recording
10. Functional electric stimulation
11. In vivo electrophysiology

Optical recording and control of neurons (optogenetics)
12. Measuring neurons optically, fundamentals of optical microscopy
13. Fluorescent probes and scanning microscopy, optogenetics, in vivo microscopy

14. Measuring biochemical signals
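As a small worked example for the "Membrane and Transport" topic above, the Nernst equation gives the equilibrium potential of an ion species, E = (R*T / z*F) * ln([out]/[in]). The concentrations below are textbook-typical mammalian values quoted as assumptions, not course data.

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 310.0       # body temperature, K

def nernst(z, c_out, c_in):
    """Nernst equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

e_k = nernst(z=1, c_out=5.0, c_in=140.0)    # potassium: roughly -89 mV
e_na = nernst(z=1, c_out=145.0, c_in=12.0)  # sodium: roughly +67 mV
print(f"E_K  = {e_k:.1f} mV")
print(f"E_Na = {e_na:.1f} mV")
```

These two opposing equilibrium potentials are the battery terminals of the Hodgkin-Huxley action potential covered in lectures 3-4.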
Lecture notes: A detailed script is provided for each lecture, including the exercises and their solutions.
Literature: Plonsey and Barr, Bioelectricity: A Quantitative Approach (third edition)
Prerequisites / Notice: The course requires an open attitude to the interdisciplinary approach of bioelectronics.
In addition, it requires undergraduate entry-level familiarity with electric & magnetic fields/forces, resistors, capacitors, electric circuits, differential equations, calculus, probability calculus, Fourier transformation & frequency domain, lenses / light propagation / refractive index, pressure, diffusion AND basic knowledge of biology and chemistry (e.g. understanding the concepts of concentration, valence, reactants-products, etc.).
Competencies
Subject-specific Competencies
  Concepts and Theories: assessed
  Techniques and Technologies: assessed
Method-specific Competencies
  Analytical Competencies: assessed
  Decision-making: fostered
  Media and Digital Technologies: fostered
  Problem-solving: assessed
  Project Management: fostered
Social Competencies
  Communication: fostered
  Cooperation and Teamwork: fostered
  Customer Orientation: fostered
  Leadership and Responsibility: fostered
  Self-presentation and Social Influence: fostered
  Sensitivity to Diversity: fostered
  Negotiation: fostered
Personal Competencies
  Adaptability and Flexibility: fostered
  Creative Thinking: assessed
  Critical Thinking: assessed
  Integrity and Work Ethics: fostered
  Self-awareness and Self-reflection: fostered
  Self-direction and Self-management: fostered