Aim and Scope
Underlying the new wave of Artificial Intelligence is the multifaceted approach to neural networks research popularly known as Deep Learning – a hierarchical approach to neural network processing. Over the past decade or so this research has yielded achievements above all in image classification and in time series regression and classification. Dominant application areas have included face recognition, fMRI processing for medical diagnosis, and natural language processing. More recently Deep Reinforcement Learning has emerged as a research area, driven in part by enterprises such as DeepMind, and has found particular application in autonomous systems, e.g. autonomous vehicles and robotics.
In the academic community, the nature and extent of top-down influence on bottom-up processing (i.e. generative vs. discriminative processing) in neural computation in the brain, as well as the neural dynamics of brain and body interacting in the environment, have been at the centre of contention in recent years. Predictive coding theories and computational modelling approaches (Rao & Ballard 1999, Friston 2010, Hohwy 2013, Clark 2015) place generative and inferential processing at the root of cognition. According to this perspective, top-down activation accounts for perception, and all that is ‘fed’ forward up the cortical/processing hierarchy are values that indicate the discrepancy between what is predicted and what is actually sensed (i.e. prediction error). Other theories (Rolls 2016, Schneegans et al. 2016) emphasize the neural dynamics of perceptual processing, e.g. that top-down attention may modulate bottom-up activity, but through sub-threshold ‘biasing’ of activation, i.e. there is no ‘hallucinatory’ predictive activation. Still other theories, and computational models and autonomous robotics implementations thereof, have focused on both top-down (O’Reilly et al. 2017; Choi & Tani 2018) and bottom-up (Thelen et al. 2001, Schutte et al. 2003, Simmering et al. 2008) processing, emphasizing the role of developmental and neural dynamic processes in hierarchical perception and cognition. Furthermore, such neural dynamics are evaluated in the context of the embodied interaction of autonomous agents, whereby cognitive capacities are considered emergent from the interactions of a number of parameters internal and external to the agent (e.g. Thelen et al. 2001).
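The core computational idea shared by these predictive coding accounts – top-down predictions flowing down a hierarchy while only prediction errors are fed forward – can be illustrated with a toy one-layer sketch. This is purely illustrative, not a reproduction of any cited model; the weights, dimensions, and learning rate are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer predictive coding loop (illustrative only):
# a latent estimate r generates a top-down prediction W @ r;
# only the prediction error (input - prediction) is fed forward
# and used to update the latent estimate.
W = rng.normal(size=(8, 4)) * 0.5   # generative (top-down) weights, fixed here
r = np.zeros(4)                      # latent cause estimate
x = rng.normal(size=8)               # sensory input

lr = 0.1
for _ in range(200):
    prediction = W @ r               # top-down prediction of the input
    error = x - prediction           # feedforward prediction error
    r = r + lr * (W.T @ error)       # error-driven update of the estimate

# After settling, the residual error is the part of the input
# the generative model cannot explain away.
residual = np.linalg.norm(x - W @ r)
```

The loop performs gradient descent on the squared prediction error; in full predictive coding schemes the same error signal would also drive slower learning of the weights themselves.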
Affective and emotional processing constitute another type of processing relevant to hierarchical and neural dynamic approaches. They have mostly been neglected by deep learning approaches (but see Barros & Wermter 2016, Churamani et al. 2018), though through recourse to reinforcement learning and homeostatic regulation they may find application in deep reinforcement learning. Rolls (2016) has suggested that the representation of objects made robust to input variations in orientation and spatial displacement (an essential feature of deep hierarchical neural processing) is critical for learning their reward and punishment value. Such value dimensions, as well as those that encode for perceived omissions of these expected reinforcers, are constitutive of emotional states. Barrett (2017) (see also Barrett et al. 2016), on the other hand, has suggested that affective states should be conceptualized as low-dimensional, multimodal summaries of ‘feelings’, which are viewed as hierarchically organized and contextual predictions of visceral change, in line with a predictive coding perspective. Affective hubs (e.g. the amygdala), by this view, actually encode uncertainty about predicted input and are not ‘emotional’ as such. Allostasis – a type of predictive homeostasis (Sterling 2004, Lowe et al. 2017) – is suggested to provide a means of regulating affective activity according to strict energy budgets (Barrett 2017), with top-down biasing of neurophysiological states predicted to meet current demand. Applied (robotics) and neurophysiological architectures that implement allostasis have also recently emerged (Vernon et al. 2015, Corcoran & Hohwy 2018). Predictive coding architectures with interoceptive processing capabilities have also been put forward for use in the area of autonomous vehicles (Engström et al. 2018).
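One simple way to make the link between homeostatic regulation and reinforcement learning concrete is a drive-reduction reward: reward is the decrease in deviation of an internal variable from its setpoint. This is a generic toy sketch under that assumption, not the architecture of any of the works cited above; function names and numbers are hypothetical:

```python
# Toy drive-reduction reward: an agent's reward is the decrease in
# deviation of an internal variable (e.g. energy level) from its
# setpoint. Under an allostatic reading, the setpoint itself would be
# adjusted predictively, in anticipation of demand.
def drive(level, setpoint):
    """Homeostatic drive: squared deviation from the setpoint."""
    return (level - setpoint) ** 2

def reward(level_before, level_after, setpoint):
    """Reward = drive reduction (positive when moving toward the setpoint)."""
    return drive(level_before, setpoint) - drive(level_after, setpoint)

# Eating when depleted (level below setpoint) yields positive reward:
r_eat = reward(level_before=0.3, level_after=0.6, setpoint=0.8)
# Overshooting past the setpoint makes the reward negative:
r_over = reward(level_before=0.7, level_after=1.3, setpoint=0.8)
```

Coupling such internally generated reward to a reinforcement learner is one route by which affective regulation could enter deep reinforcement learning architectures.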
The aim of this symposium is to bring together world-leading experts from areas of neural computational and theoretical research who hold differing perspectives on the nature of (hierarchical) neural dynamics and predictive processing and their role in perception and cognition, as well as on the role of development and affective (and affective regulatory) processes in such processing. The symposium will also aim to elucidate how such cognitive theory can be brought to bear on applied industrial research concerning autonomous systems, above all in the areas of robotics and autonomous vehicles.
Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt.
Barrett, L. F., Quigley, K. S., & Hamilton, P. (2016). An active inference theory of allostasis and interoception in depression. Phil. Trans. R. Soc. B, 371(1708), 20160011.
Barros, P., & Wermter, S. (2016). Developing crossmodal expression recognition based on a deep neural model. Adaptive Behavior, 24(5), 373–396.
Clark, A. (2015). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.
Corcoran, A., & Hohwy, J. (2018). Allostasis, interoception, and the free energy principle: Feeling our way forward. In M. Tsakiris & H. De Preester (Eds.), The interoceptive basis of the mind. Oxford: Oxford University Press.
Choi, M., & Tani, J. (2018). Predictive coding for dynamic visual processing: Development of functional hierarchy in a multiple spatiotemporal scales RNN model. Neural Computation, 30(1), 237-270.
Churamani, N., Barros, P., Strahl, E., & Wermter, S. (2018). Learning empathy-driven emotion expression using affective modulations. In Proceedings of the International Joint Conference on Neural Networks (IJCNN) (pp. 1400–1407). IEEE.
Engström, J., Bärgman, J., Nilsson, D., Seppelt, B., Markkula, G., Piccinini, G. B., & Victor, T. (2018). Great expectations: a predictive processing account of automobile driving. Theoretical issues in ergonomics science, 19(2), 156-194.
Friston, K. (2010). The free-energy principle: a unified brain theory?. Nature Reviews Neuroscience, 11(2), 127.
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Lowe, R., Dodig-Crnkovic, G., & Almer, A. (2017). Predictive regulation in affective and adaptive behaviour: An allostatic-cybernetics perspective. In Advanced Research on Biologically Inspired Cognitive Architectures (pp. 149-176). IGI Global.
O'Reilly, R. C., Wyatte, D. R., & Rohrlich, J. (2017). Deep Predictive Learning: A Comprehensive Model of Three Visual Streams. arXiv preprint arXiv:1709.04654.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature neuroscience, 2(1), 79.
Rolls, E. T. (2012). Invariant visual object and face recognition: neural and computational bases, and a model, VisNet. Frontiers in Computational Neuroscience, 6, 35.
Rolls, E. T. (2016). Cerebral cortex: principles of operation. Oxford University Press.
Schneegans, S., Spencer, J. P., & Schöner, G. (2016). Integrating “what” and “where”: Visual working memory for objects in a scene. In G. Schöner & J. P. Spencer (Eds.), Dynamic Thinking: A Primer on Dynamic Field Theory. New York: Oxford University Press.
Schutte, A. R., Spencer, J. P., & Schöner, G. (2003). Testing the dynamic field theory: Working memory for locations becomes more spatially precise over development. Child development, 74(5), 1393-1417.
Simmering, V. R., Schutte, A. R., & Spencer, J. P. (2008). Generalizing the dynamic field theory of spatial cognition across real and developmental time scales. Brain research, 1202, 68-86.
Sterling, P. (2004). Principles of allostasis: optimal design, predictive regulation, pathophysiology, and rational therapeutics. In Allostasis, homeostasis, and the costs of physiological adaptation (p. 17).
Thelen, E., Schöner, G., Scheier, C., & Smith, L. B. (2001). The dynamics of embodiment: A field theory of infant perseverative reaching. Behavioral and brain sciences, 24(1), 1-34.
Vernon, D., Lowe, R., Thill, S., & Ziemke, T. (2015). Embodied cognition and circular causality: on the role of constitutive autonomy in the reciprocal coupling of perception and action. Frontiers in psychology, 6, 1660.
Lindholmen Conference Centre, Lindholmspiren 5 SE-402 78, Gothenburg, Sweden
Cognition & Philosophy Lab, Monash University, Melbourne, Australia.
Professor Hohwy conducts interdisciplinary research in the areas of philosophy, psychology, and neuroscience. In his Cognition & Philosophy Lab in the philosophy department he conducts experiments on the nature of perception and cognition. Prof. Hohwy collaborates with a number of neuroscientists and psychologists from Monash University and around the world. He works on general theories of brain function, which hold that the brain is primarily a sophisticated hypothesis tester.
Chair Theory of Cognitive Systems, Director Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
Professor Schöner's research focuses on how embodied and situated nervous systems develop cognition. Prof. Schöner and collaborators have developed the theoretical framework of Dynamical Field Theory. In a set of close theory-experiment collaborations Schöner and collaborators validate the concept of the theory and systematically build an account of action, perception, and embodied cognition. Exemplary studies include multi-degree of freedom movements, learning of motor skills, perception of motion, working memory for action, space and visual features, sensory-motor decision making and the development of early cognition and motor behavior. In a second line of research they develop autonomous robots inspired by these same theoretical principles. The main emphasis is on service robotics, in which autonomous robots interact with human users.
Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London
Thomas conducts research at the Wellcome Trust Centre for Neuroimaging at UCL (incorporating the Leopold Muller Functional Imaging Laboratory and the Wellcome Department of Imaging Neuroscience), which is an interdisciplinary centre for neuroimaging excellence. He is part of Professor Karl Friston's lab developing advanced mathematical techniques that allow researchers to characterise brain organisation.
Developmental Dynamics Lab, School of Psychology, University of East Anglia, UK
John P. Spencer is a Professor of Psychology. He joined UEA in 2015. Prior to arriving in the UK, he was a Professor at the University of Iowa and served as the founding Director of the DeLTA Center. He is the recipient of the 2003 Early Research Contributions Award from the Society for Research in Child Development, and the 2006 Robert L. Fantz Memorial Award from the American Psychological Foundation. His research has been continuously funded by the National Science Foundation and the National Institutes of Health since 2001. His research focuses on the development of executive function including working memory, attention, and inhibitory control. He is also a pioneer in the use of dynamical systems and dynamic neural field models for understanding cognition and action.
Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology (OIST), Japan
Jun Tani received the B.S. degree in mechanical engineering from Waseda University, Tokyo, Japan in 1981, dual M.S. degrees in electrical engineering and mechanical engineering from the University of Michigan, Ann Arbor, MI, USA in 1988, and the D.Eng. degree from Sophia University, Tokyo in 1995. He started his research career with Sony Laboratory, Tokyo, in 1990. He was a PI of the Laboratory for Behavior and Dynamic Cognition, RIKEN Brain Science Institute, Saitama, Japan, for 12 years until 2012, and then a Full Professor in the Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon, South Korea, from 2012 to 2017. He is currently a Full Professor with the Okinawa Institute of Science and Technology, Okinawa, Japan. His current research interests include cognitive neuroscience, developmental psychology, phenomenology, complex adaptive systems, and robotics. He is the author of 'Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena', published by Oxford University Press in 2016.
Cognitive Development Lab, Indiana University, Bloomington, USA
Linda Smith is a Distinguished Professor and Chancellor's Professor of Psychological and Brain Sciences at Indiana University, Bloomington. She has this year been elected to membership of the National Academy of Sciences (NAS) as well as honoured by the Society of Experimental Psychologists. Linda Smith's research seeks to understand the developmental process, and in particular the cascading interactions of perception, action, attention and language as children between the ages of 1 and 3 acquire their first language. In 1994, she and her colleague Esther Thelen published the pioneering book 'A Dynamic Systems Approach to the Development of Cognition and Action', which proposed a new theory of the development of human cognition and action called dynamic systems theory. They argued that all the parts of a system work together to create some action, such as a baby successfully grasping a toy: the limbs, the muscles, and the baby's visual perception of the toy all unite to produce the reaching movement. This contrasts with the more classical neuromaturational theory of infant motor development, which holds that as the brain gets bigger and better, it instructs the body to do more complicated things.
International Research Center for Neurointelligence, University of Tokyo
Dr. Yukie Nagai has been investigating the neural mechanisms underlying social cognitive development by means of computational approaches. She designs neural network models that enable robots to acquire cognitive functions such as self-other cognition, estimation of others’ intentions and emotions, and altruism, based on her theory of predictive learning. The simulator developed by her group, which reproduces atypical perception in autism spectrum disorder (ASD), has had a great impact on society, as it enables people with and without ASD to better understand potential causes of social difficulties. She has been the research director of the JST CREST project Cognitive Mirroring since December 2016.
Department of Philosophy, Lund University, Lund, Sweden
Christian Balkenius is a professor of Cognitive Science at Lund University. His research concerns understanding the human brain and the mechanisms behind attention, memory and learning. He develops and tests theories of human cognition by reproducing human abilities in artificial systems, such as computers and robots. A special focus is on how the early development of infants can be reproduced in robots.
Northeastern University, Boston, USA
Karen is a Research Professor of Psychology at Northeastern University. Her current research is guided by theories (i.e., the Theory of Constructed Emotion and the Self-Regulation Model of Illness and Health) that postulate a critical role for bodily sensations in constructing mental states and in health-related behaviors like physical symptom reports and health care use. In her basic science work, she uses psychophysiological (e.g., interoceptive tasks like heartbeat detection and cardiorespiratory measures during affective inductions), behavioral and self-report methods to examine the role of the peripheral nervous system in the creation of mental states like affect or emotion. She is especially interested in affective reactivity as an important individual difference relevant for health. Her applied lines of work focus on the physiological, behavioral and cognitive effects of stressors and threats on health outcomes in those exposed to war or other major life events like terrorism threats. Recent work has also focused on using technology and training to teach skills for reducing symptoms and enhancing health functioning in Veterans.
Department of Mathematics and Applications, University of Minho, Portugal
Wolfram Erlhagen is Professor at the Department of Mathematics and Applications at the University of Minho in Portugal. His main research interests include the development, mathematical analysis and implementation in autonomous robots of neuro-inspired computational models of cognitive functions. Over the last several years, he has developed, together with an interdisciplinary team, a control architecture for natural human-robot interactions based on the theoretical framework of Dynamic Neural Fields. A key challenge is to endow the robot with a human-like action prediction capacity that allows the human-robot team to synchronize their decisions and actions in space and time. The DNF architecture implements action understanding as a sensory-based combination of predictions at hierarchically organized levels of movement, object and task context.
Department of Engineering, Aarhus University, Denmark
Nicolas is a robotics and artificial intelligence researcher with a passion for applied robotics and automation research. He is particularly interested in academic/industrial partnerships for the development of knowledge-intensive R&D solutions for socially relevant challenges including welfare, agriculture, disaster relief and humanitarian demining. Nicolas advocates for a holistic development of robots, i.e. considering hardware and software alike, which he considers perfectly aligned with the concepts of embodied and developmental robotics. He also believes that robotic development should be human-centric, fully embracing its impact on society and the environment.
Institute of Neuroinformatics, Zurich University and ETH Zurich.
Yulia Sandamirskaya is a group leader in the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Her group “Neuromorphic Cognitive Robots” studies movement control, memory formation, and learning in embodied neuronal systems and implements neuronal architectures in neuromorphic devices, interfaced to robotic sensors and motors. She has a degree in Physics from the Belarusian State University in Minsk and a PhD from the Institute for Neural Computation in Bochum, Germany. She is the chair of EUCOG — the European network for Artificial Cognitive Systems — and coordinator of the NEUROTECH project (neurotechai.eu), which organises a community around neuromorphic computing technology.
Chalmers University of Technology.
Jonas Bärgman is a researcher / associate professor, and leads the research group Safety Evaluation in the Division of Vehicle Safety at the Department of Mechanics and Maritime Sciences. Jonas' research has several components, all aimed at understanding how traffic safety is affected by driver behavior, vehicles (including intelligent systems/vehicle automation), and the environment (e.g., other road users) in the pre-crash phase. His research ranges from quantifying driver comfort-zone boundaries in everyday driving, via understanding why crashes occur (crash causation mechanisms) and developing quantitative models of driver behavior in critical situations, to the development and application of the counterfactual (computer) simulation method to evaluate the safety impact of driver behaviors, driver support and automated systems, and the environment. A majority of the research uses different forms of naturalistic driving data as a key component, but test-track, on-road and simulator experiments are also research tools Jonas uses.
Below is the schedule for the two-day symposium May 6th and May 7th.
|09h00||Tesla||Organizers||Introduction to the Symposium: May 6th||Predictive Coding and Neural Dynamics: Theory and Research on Development|
|09h20 May 6th||Tesla||Jakob Hohwy||Predictive coding, predictive processing, and the free energy principle||Predictive coding is a process theory within a broad 'predictive processing' framework, which can itself be unified under the free energy principle. I give an overview of the predictive processing framework, present the explanatory tools that the framework makes available, and discuss reasons for adopting the free energy principle. The broader perspective drastically expands the scope of phenomena that can be explained, including to homeostatic control, salience, and emotion. The wide explanatory scope also raises issues about the epistemic status of the overall predictive processing approach, concerning its falsifiability and distinctiveness.|
|10h00 May 6th||Tesla||Gregor Schöner||Neural dynamics rather than abstract computational principles help us understand the mind||Both in evolution and in development we see the sensorimotor grounding of the mind. Going back at least to Sherrington, the notion that behaviors are built from action-perception loops has permeated much of neuroscience and psychology. Such loops require stability, so that small changes in input lead to small changes in action. A limitation of this cybernetic tradition in neuroscience is the lack of flexibility of such systems and their failure to abstract from sensor data. Flexibility requires change, which stability resists. The key to flexibility is thus instability. The dynamical systems approach generalized the cybernetic line of thinking by invoking instabilities to understand embodied forms of decision making. It was tempting to mistake the abstractness of the level of description in some early dynamical systems accounts for abstraction achieved by the nervous system. In reality, the formalistic nature of some of these accounts ultimately left them somewhat sterile. Observing that the nervous system acts like a dynamical system falls short of understanding how that happens. The grounding of dynamical systems ideas in principles of neural processing has, in contrast, promoted fertile research questions that led to insightful discoveries. In particular, it became clear that which neural states are stabilized is an empirical question. Not all neural variables are kept equally stable. Instead, certain levels of description are privileged in that they capture invariants of neural representations that do not change as an organism moves, as the environment changes, or as other neural processes unfold. The neural activation states that form stable representations (or are 'predictive', in a different phrasing) reveal such invariances and help us understand how they are brought about by the functional architecture of the brain and the associated coordinate transforms. I will use a number of concrete examples to illustrate neural dynamical principles that help us understand the emergence of cognition and the invariance of cognitive processes against sensorimotor transformations.|
|11h00 May 6th||Tesla||Thomas Parr||Message passing under active inference||Recent developments in theoretical neurobiology rest upon the idea that the brain uses an internal (generative) model of its environment to explain and predict its sensations. Active inference is a first-principles account of the process of optimising beliefs about the world through perception, and optimising the world itself through movement. These inferential processes depend upon the passing of messages between populations of neurons, where the form of these messages depends upon the generative model employed. In this talk, we will explore the sorts of models used by the brain, and the message passing these entail. This leads to predictive coding in the setting of inference about the continuous variables required for proprioception and movement, and generalises to categorical inferences of the sort required for planning and decision making. The latter offers an opportunity to plan actions that resolve uncertainty, through selecting those sensory data that are most informative for inference and learning. This drive towards uncertainty resolution provides an intrinsic motivation for autonomous systems, whose disruption can have profound pathological consequences. We will conclude by considering the deep hierarchical composition of these models, and the way in which these architectures support the passage of messages from decisions into movements – and back again.|
|11h40 May 6th||Tesla||John Spencer||Bridging the lab and lounge to foster the early development of visual working memory||In this talk, I will present an overview of a line of work in which we have used neural dynamics to understand the mechanisms that underlie visual working memory (VWM) in early development. This work builds from initial efforts to understand VWM in the laboratory to understanding how VWM operates in real-world dyadic contexts. The end-product is a model that helps us conceptualize autonomous neural development in a way that bridges between the lab and the lounge.|
|13h20 May 6th||Tesla||Jun Tani||Accounts of the development of embodied cognition using predictive coding and active inference frameworks||The focus of my research has been to investigate how cognitive agents can acquire structural representations via iterative interactions with the world, exercising agency and learning from resultant perceptual experience. For this purpose, my group has investigated various models analogous to the frameworks of predictive coding and active inference. For the past two decades we have applied these frameworks to develop cognitive constructs for robots. This presentation will introduce our recent findings with some of the highlights from these studies.|
|14h00 May 6th||Tesla||Linda Smith||Learning: Babies, bodies and machines||Learning depends on both the learning mechanism and the training material. This talk considers the natural statistics of infant visual experience. These natural training sets for human visual object recognition challenge usual assumptions about how we think about learning. These visual experiences are created in real time by infants’ own behaviors. They change systematically as infants’ bodies and behaviors change. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. The skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines.|
|15h00 May 6th||Tesla||Yukie Nagai||Predictive coding account for social cognitive development and its disorders||My talk presents computational models based on predictive coding that account for social cognitive development in infants. The theory of predictive coding suggests that the human brain works as a predictive machine, that is, it tries to minimize prediction error. Inspired by the theory, we have been developing neural network models for robots to learn to acquire various cognitive functions such as self-recognition, goal-directed action, estimation of others’ intentions and emotions, and altruistic behavior. Our experiments demonstrated that the continuity and diversity of development can be reproduced by a unified framework of predictive coding. Behavioral characteristics of developmental disorders such as autism spectrum disorder were also observed by modifying model parameters. Our results suggest that predictive coding serves as a unified theory for human development.|
|15h40 May 6th||Tesla||Christian Balkenius||Memory and Attention in Goal-Directed Action: From Principles to a Robot Implementation||I will describe a set of principles for a cognitive architecture that allows it to learn to execute goal-directed actions. The basis for the architecture is the idea of attention as selection-for-action, which leads to a view of action execution as consisting of two stages. In the first stage, a target for the action is selected by the attention system. This selection depends on a number of factors including expectations of the environment and emotional evaluation. In the second stage, an action compatible with the selected target object is selected. This action can range from manipulation and locomotor behavior to internal simulation. A main component is a memory system that supports place-object bindings, episodic and semantic associations, working memory and spatial navigation. The principles have been implemented in robots that autonomously learn to explore and manipulate their environment.|
|17h00||End of Day 1||-|
|09h00||Tesla||Organizers||Introduction to the Symposium: May 7th||Predictive Coding and Neural Dynamics: Research on Affect and Applications|
|09h20 May 7th||Tesla||Karen Quigley||Allostasis and Interoception: Implications for Designing a Robot||For much of the history of psychology, sensation, perception, action, emotion, and cognition were studied as if they were separate, biologically-defined faculties -- they are not. A prominent current neuroscientific perspective (and variants thereof) suggests that a brain runs an internal, predictive model or simulation of itself in the world. This model supports all functions achieved by a brain, and in this view, predictions constitute the internal model. Our lab has marshaled neuroanatomical evidence that predictions arise from visceromotor control regions in the brain to support anticipated action and other metabolically costly functions such as learning. Collectively, these anticipatory regulatory processes are called allostasis. Allostasis is the major task of a brain, which utilizes 20% of the energetic budget of a human. A brain also requires a body, which is the effector by which the brain supports maintenance of its own energetic needs. The internal model also is modified by prediction error arising from unanticipated inputs from both exteroceptive (e.g., vision) and interoceptive (e.g., viscerosensory) sources. Interoceptive sensations provide critical information to the brain about the status of the body, enabling motor and visceromotor actions that can most efficiently support the brain’s energetic needs. I will consider how allostasis and interoception could inform the design of robots that model human behavior and mental life.|
|10h00 May 7th||Tesla||Wolfram Erlhagen||Off-line simulation inspires insight: A neurodynamics approach to efficient task learning.||Interactive learning-by-demonstration of sequential tasks has been a very active research area in robotics over the last decade. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. I present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning, to robustly represent sequential information from single task demonstrations, with slower weight-based learning during internal simulations, to establish longer-term associations between neural populations representing individual subtasks. I show the efficiency of the learning process in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders, together with the correction of initial prediction errors, allow the robot to acquire generalized task knowledge about possible serial orders and the longer-term dependencies between subgoals in very few social learning interactions.|
|11h00 May 7th||Tesla||Nicolas Navarro-Guerrero||Towards Temporal Difference Learning with Multidimensional Value Representations||Temporal-difference learning is widely adopted and has been successfully applied to a number of problems, as well as used to model animal learning. However, these models are mainly based on neural pathways involved in reward-seeking behaviour, since little is known about punishment-driven learning and less still about the combined effects of both types of reinforcement on learning. In this talk, I will discuss the implications for machine learning and robotics of this incomplete model of reward-based learning and possible strategies to deal with it.|
|11h40 May 7th||Tesla||Yulia Sandamirskaya||Event-based cognition: creating stability in the world of transients.||Biological neuronal networks seem to rely on event-based processes: individual neurons produce spikes — brief transient increases in membrane potential — and seem to communicate their activation levels with these spike-events. Spikes allow robust and efficient communication over long-range and noisy channels. This has inspired the development of neuromorphic processors and sensors that emulate the dynamics of biological spiking neurons and synapses in hardware. This event-driven nature leads to a prevalence of transients in neuronal dynamics, both on the sensory level, where sensory systems transiently respond to changes, and in processing. For feedforward neuronal architectures, e.g. deep and convolutional neuronal networks, the event-based nature of neuromorphic computing helps to save time (making computing fast) and energy (only consuming power when needed). Cognitive behavior and architectures, however, require the existence of stabilised activation patterns: e.g., the neuronal representation of an intention needs to be sustained long enough to initiate movement and observe its consequences, combining visual sensations into a scene representation requires sustained working memory of feature-location bindings between saccades or attention shifts, and learning requires keeping the representations of stimulus, response, and their consequence co-active, as well as eligibility traces sustained long enough to contribute to reward-driven sequence learning. How can we enable this stability and sustained activation, required for neuronal implementation of cognitive functions, in an event-based transient computing substrate?
I will show in this talk how the conceptual and mathematical framework of attractor dynamics and dynamic neural fields can be transferred onto the dynamics of spiking neural networks, and how this allows us to build architectures in neuromorphic hardware that exhibit memory formation, coordinate transformations, adaptive feedback control, and learning in a closed behavioral loop. Apart from a number of technical applications, e.g. in the development of neuromorphic controllers for cognitive robots, these architectures pave the way to a better understanding of the neuronal basis of cognition.|
|13h20 May 7th||Tesla||Jonas Bärgman||The predictive processing account for driving applications||A few years ago, we at the Chalmers Division for Vehicle Safety turned our attention to predictive processing as a way to approach the modelling of driver behavior (see Engström, Bärgman, Nilsson, et al. 2016, "Great expectations: a predictive processing account of automobile driving"). Rather than focusing on the theory and details of implementation, this talk is an application- and scenario-focused presentation on how the predictive processing account may be applied in the driving domain, including examples and a discussion of some of the most urgent needs.|
|13h45 May 7th||Tesla||Robert Lowe||Predictive Coding in Reward-Based Affective Computation||Presentation + Summarizing Symposium comments|
|14h20||Tesla||Panel Discussion + closing comments from organizers||-|
|15h20||Newton||-||Workshop||For invited speakers and organizers only|
|17h00||End of Day 2||-|
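Several of the talks above (Erlhagen; Sandamirskaya) build on dynamic neural fields. As a rough illustration of the shared core mechanism, and not of any speaker's actual model, the sketch below simulates a minimal one-dimensional Amari-style field with local lateral excitation and global inhibition. All parameter values and the function name `simulate_field` are illustrative assumptions.

```python
import numpy as np

def simulate_field(steps=300, n=101, dt=1.0, tau=20.0, h=-2.0,
                   beta=4.0, c_inh=3.0):
    """Minimal 1-D Amari field (illustrative parameters):
    tau * du/dt = -u + h + s(x) + sum_x' w(x - x') f(u(x')) - inhibition."""
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    w_exc = 6.0 * np.exp(-d**2 / (2 * 4.0**2))   # local excitatory kernel
    s = np.zeros(n)
    s[45:56] = 5.0                                # localized external input
    u = np.full(n, h)                             # field starts at resting level
    for _ in range(steps):
        # sigmoidal firing-rate function (clipped to avoid overflow)
        f = 1.0 / (1.0 + np.exp(-beta * np.clip(u, -10.0, 10.0)))
        # lateral excitation minus global inhibition
        recurrent = w_exc @ f - c_inh * f.sum()
        u = u + (dt / tau) * (-u + h + s + recurrent)
    return u

u = simulate_field()
```

With these settings a self-stabilised activation peak forms over the stimulated sites while distant sites are suppressed below resting level, the kind of localized attractor state that the DNF-based talks use to represent intentions, memories, and subtasks.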
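Navarro-Guerrero's talk takes standard temporal-difference learning as its starting point. For readers unfamiliar with that baseline, here is a minimal tabular TD(0) sketch of reward-only value learning (the talk's multidimensional reward/punishment representation is not specified here); the chain task, parameters, and function name `td0_chain` are illustrative assumptions.

```python
def td0_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9):
    """Tabular TD(0) on a toy chain: the agent starts in state 0 and moves
    right until the terminal state, receiving +1 only on reaching the end."""
    V = [0.0] * (n_states + 1)       # V[n_states] is terminal and stays 0
    for _ in range(episodes):
        s = 0
        while s < n_states:
            s_next = s + 1           # deterministic rightward move
            r = 1.0 if s_next == n_states else 0.0
            # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V

V = td0_chain()
# Values converge toward gamma**(n_states - 1 - s), rising along the chain.
```

The update propagates the terminal reward backwards through the chain, so learned values increase monotonically toward the goal; punishment-driven learning, the talk's focus, has no separate channel in this standard formulation.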
University of Gothenburg
Docent in Cognitive Science at the Department of Applied IT, Division of Cognitive Science and Communication, University of Gothenburg. Robert Lowe has a background in Psychology and Computer Science and research interests in Affective Computing, Cognitive Robotics and Computational Modelling.
University of Gothenburg
Head of the Division of Cognitive Science and Communication, Department of Applied IT, University of Gothenburg. Dr. Alexander Almér has a research background in the philosophy of language and mind and is currently conducting interdisciplinary research in philosophy and cognitive science.