ISDA 2014

Plenary Speakers

Machine learning and brain science
Kenji Doya, Neural Computation Unit, Okinawa Institute of Science and Technology (OIST)

[Abstract] Machine learning research has evolved in interaction with brain science in a variety of ways. For example, the discovery of feature detectors in the visual system motivated the design of the perceptron, and in return, the theory of unsupervised learning gave an account of how such feature detectors can emerge by capturing the statistics of the visual environment. The hierarchical organization of the visual cortex motivated the design of deep convolutional neural networks, which now achieve superb performance in machine vision.

In this lecture, I will review such dynamic interactions between machine learning and brain science and report our own brain science research motivated by reinforcement learning theory. Topics include action-value coding in the basal ganglia and the regulation of temporal discounting by serotonin.
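Temporal discounting, mentioned above, can be illustrated with a minimal sketch of exponentially discounted return, the quantity reinforcement learning theory uses to value delayed rewards. This is a generic textbook example, not the speaker's actual model; all names and parameter values are illustrative.

```python
def discounted_value(rewards, gamma):
    """Present value of a reward sequence under exponential discounting.

    gamma (0 < gamma <= 1) is the discount factor: lower gamma devalues
    delayed rewards more steeply. (The talk concerns how serotonin may
    regulate this discounting; here gamma is just a plain parameter.)
    """
    return sum(r * gamma ** t for t, r in enumerate(rewards))

immediate = [1.0]                  # small reward now
delayed = [0.0, 0.0, 0.0, 2.0]     # larger reward after three steps

# A patient agent (high gamma) prefers the delayed reward...
print(discounted_value(delayed, 0.9) > discounted_value(immediate, 0.9))  # True
# ...while an impatient agent (low gamma) prefers the immediate one.
print(discounted_value(delayed, 0.5) < discounted_value(immediate, 0.5))  # True
```

Changing a single scalar thus flips the agent's preference between small immediate and large delayed rewards, which is what makes the discount factor a natural target of neuromodulatory regulation.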

[Biography] KENJI DOYA received his B.S. in 1984, M.S. in 1986, and Ph.D. in 1991 from the University of Tokyo. He became a research associate at the University of Tokyo in 1986, U.C. San Diego in 1991, and the Salk Institute in 1993. He joined ATR in 1994 and became the head of the Computational Neurobiology Department, ATR Computational Neuroscience Laboratories, in 2003. In 2004, he was appointed principal investigator of the Neural Computation Unit, Okinawa Institute of Science and Technology (OIST), and started the Okinawa Computational Neuroscience Course (OCNC) as its chief organizer. When OIST was re-established as a graduate university in 2011, he became a professor and the vice provost for research. He has served as co-editor-in-chief of Neural Networks since 2008. He is interested in understanding the functions of the basal ganglia and neuromodulators based on the theory of reinforcement learning. Contact: doya@oist.jp, 1919-1 Tancha, Onna, Okinawa 904-0495, Japan.

Modelling basic perceptual functions
Andrew P. Paplinski, Monash University, Australia

[Abstract] Perception describes the way in which our brain interprets sensory information and creates a representation of the environment.
We present a system that can integrate visual and auditory information and bind it to internal mental concepts.

The basic module of the system, loosely identified with a cortical area of the brain, consists of a stochastically fixed number of neuronal units per perceptual object and maps higher-dimensional afferent signals into a lower-dimensional “neuronal code”.

A typical perceptual system consists of three hierarchical layers of such modules: the sensory layer, the unimodal association layer, and the top multimodal association module holding the representation of the collected knowledge.
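One common way to realize such a dimensionality-reducing module is a self-organizing map, in which each perceptual object recruits nearby units on a low-dimensional lattice. The abstract does not specify the algorithm used, so the following is only a generic sketch with illustrative parameters and names.

```python
import math
import random

def train_module(data, grid=4, dim=3, epochs=50, lr=0.3, sigma=0.5):
    """Train a toy self-organizing map: a grid x grid lattice of units, each
    holding a weight vector in the afferent (input) space. Afferent signals
    of dimension `dim` are thereby mapped to 2-D lattice coordinates, playing
    the role of the lower-dimensional "neuronal code"."""
    random.seed(0)  # reproducible illustration
    w = {(i, j): [random.random() for _ in range(dim)]
         for i in range(grid) for j in range(grid)}
    for _ in range(epochs):
        for x in data:
            bmu = encode(w, x)  # best-matching unit for this input
            for n, wn in w.items():
                d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # neighborhood kernel
                for k in range(dim):
                    wn[k] += lr * h * (x[k] - wn[k])
    return w

def encode(w, x):
    """Map an afferent signal to its best unit's 2-D lattice coordinate."""
    return min(w, key=lambda n: sum((wi - xi) ** 2 for wi, xi in zip(w[n], x)))

# Three "perceptual objects" as orthogonal 3-D stimuli:
stimuli = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
weights = train_module(stimuli)
codes = [encode(weights, x) for x in stimuli]
print(codes)  # each stimulus reduced to a 2-D grid coordinate
```

In a hierarchy of the kind described above, the 2-D codes produced by sensory-layer modules would themselves serve as afferent inputs to the association-layer modules.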

We will demonstrate three versions of such a system that:

  • binds concepts to spoken names,
  • binds written words to mental objects,
  • integrates visual and auditory stimuli.

Finally, if time permits, we will demonstrate how knowledge can be transferred between such perceptual systems.

[Biography] Andrew P. Paplinski received his M.Eng. and Ph.D. degrees from the Faculty of Electronic Engineering, Warsaw University of Technology, Poland.
After moving to Australia, Andrew worked at the Department of Computer Science, Australian National University in Canberra, the Department of Electrical and Electronic Engineering, University of Adelaide, and finally at the School of IT at Monash University, where he is an Associate Professor.

Since 2012 Andrew has been teaching in the newly formed Southeast University-Monash University Joint Graduate School in Suzhou, China.

Andrew has visited and collaborated with King's College London, the University of Oregon, the University of New Mexico, the University of Illinois at Urbana-Champaign, Nanyang Technological University, the Technical University of Denmark, and Luleå University of Technology, Sweden.

His research activities have evolved from computer hardware design, through control systems theory, signal and image processing, and ultrasonic imaging, to his current involvement in computer vision, computational neuroscience, and computational intelligence.