Rethinking the Senses Workshop on Binding

Speaker: Multiple
Date: Friday 25 September 2015 10:00 am - 6:15 pm
Venue: Holden Room (Room 103), First floor, Senate House, Malet Street, WC1E 7HU


There is a conceptual gulf between what we know of the information conveyed by neurons in sensory areas of the cortex and our subjective experience of a coherent, integrated perceptual world. How do the individual features of objects and events (encoded by early sensory neurons) become related to each other in sensory representations and segregated from their backgrounds? How can the separately processed aspects of a single stimulus (e.g. the angles of the outline and the colour of a red triangle) be represented in the brain in a way that supports the perception of that particular form or object? How does the brain compare and combine signals from different sensory systems (e.g. while looking at and listening to a speaking person) to provide a multimodal perceptual impression? These are all questions about ‘binding’ in perception. The binding problem extends beyond the encoding of individual, segregated objects and events to the entire integration of conscious experience. How can we account for the apparent coherence, within a single subjective framework, of our perceptions, thoughts, memories and plans for action?


Chair: Colin Blakemore


Casey O’Callaghan (Department of Philosophy, Washington University, St Louis MO, USA)

Intermodal Binding & Awareness

This talk is about feature binding and its consequences – what it is and why it matters – and the relationship between binding processes and perceptual awareness, within and especially across sensory modalities. I’ll present the empirical and theoretical case that binding awareness involves a core, irreducibly multisensory variety of perceptual consciousness and defend it against objections.


Coffee break

Chair: Ophelia Deroy


Ian Phillips (St. Anne’s College, University of Oxford)

Where bound? Comments on O’Callaghan on Intermodal Binding and Awareness

I’ll offer a few brief critical remarks concerning O’Callaghan’s case for intermodal binding awareness. In particular, I’ll try to clarify where exactly the dispute between O’Callaghan and Spence & Bayne (2014) lies before considering whether the responses which O’Callaghan offers to their scepticism are fully convincing.


Glyn Humphreys (Department of Experimental Psychology, University of Oxford)

The neuropsychology of visual binding in and out of attention

I will review neuropsychological evidence on visual binding and argue that several distinct binding processes are revealed by the patterns of dissociation that occur. Deficits in visual binding have classically been associated with lesions of the posterior parietal cortex and linked to impairments in focal spatial attention, which would appear to be a necessary process in binding. Contrary to this view, I will argue for separate bindings of (i) visual elements into coherent shapes and (ii) object shapes and surfaces. Moreover, patients with impaired spatial attention can be shown to bind relationships between visual elements. The work suggests that binding can take place in multiple cortical areas, outside as well as within an attentional spotlight.



Chair: Matt Nudds


Uta Noppeney (Department of Psychology and Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham)

See what you hear. Constructing a representation of the world (within and) across the senses

In our natural environment, our senses are constantly bombarded with many different signals. To make sense of this cacophony the brain should integrate sensory signals that come from a common source and segregate those that come from different sources. Bayesian Causal Inference has recently been proposed as a normative model of how the brain should arbitrate between information integration and segregation in the face of uncertainty about the world’s causal structure.

First, we will briefly explore how Bayesian Causal Inference can accommodate information integration within and across the senses. Next, we will discuss how multisensory integration across time and space may be implemented at the neural systems level. Combining Bayesian modelling with multivariate fMRI/EEG decoding, our research demonstrates that the brain integrates sensory signals in line with Bayesian Causal Inference by simultaneously encoding multiple perceptual estimates along the audiovisual processing hierarchies. Critically, only at the top of the hierarchy, in the anterior intraparietal sulcus, is the uncertainty about the world’s causal structure taken into account: there, sensory signals are combined weighted by their sensory reliability and task relevance, as predicted by Bayesian Causal Inference. Moreover, these integration processes can be modulated by prior top-down expectations associated with prefrontal cortices.
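For readers unfamiliar with the model, the arbitration between integration and segregation can be sketched for a single audiovisual trial. The code below is a minimal illustrative sketch of the standard Bayesian Causal Inference formulation (Gaussian signals and prior), not the speaker's actual implementation; all parameter values and function names are assumptions chosen for illustration. Under a common-cause hypothesis the signals are fused weighted by their reliabilities (precisions); under an independent-causes hypothesis they are kept separate; the final estimate averages the two, weighted by the posterior probability of each causal structure.

```python
import math


def gauss(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)


def causal_inference(x_a, x_v, var_a, var_v, var_p, p_common):
    """One audiovisual trial: auditory signal x_a, visual signal x_v,
    their noise variances, a zero-mean spatial prior with variance var_p,
    and a prior probability p_common of a single common cause.
    Returns (posterior probability of a common cause, auditory estimate)."""
    # Likelihood of both signals given ONE common source
    # (source location integrated out; closed form for Gaussians).
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                               + x_a ** 2 * var_v
                               + x_v ** 2 * var_a) / denom) \
        / (2.0 * math.pi * math.sqrt(denom))
    # Likelihood given TWO independent sources.
    like_c2 = gauss(x_a, 0.0, var_a + var_p) * gauss(x_v, 0.0, var_v + var_p)
    # Posterior probability of a common cause (Bayes' rule).
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))
    # Reliability-weighted (precision-weighted) estimates under each structure.
    s_fused = (x_a / var_a + x_v / var_v) / (1.0 / var_a + 1.0 / var_v + 1.0 / var_p)
    s_a_alone = (x_a / var_a) / (1.0 / var_a + 1.0 / var_p)
    # Final estimate: average over causal structures (model averaging).
    return post_c1, post_c1 * s_fused + (1.0 - post_c1) * s_a_alone
```

With nearby signals the model infers a likely common cause and the auditory estimate is pulled toward the (more reliable) visual signal; with widely discrepant signals the common-cause posterior collapses and the signals are segregated.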

Finally, we will focus on the relation between multisensory integration and perceptual awareness. To what extent can signals that we are not aware of interact across the senses? Surprisingly, our initial results suggest that signals we are not aware of exert only a small influence on the perceptual dynamics in other sensory modalities.


Andrew Parker (Department of Physiology, Anatomy & Genetics, University of Oxford)

Some remarks on segmentation and binding

I shall present the case that image segmentation and perceptual binding are two different views of the same phenomenon. The starting point will be the nature of segmentation and binding in stereoscopic vision, but I shall show that stereo is simply an entry point into a range of visual segmentation and grouping phenomena that characterize human visual perception. I shall argue that these diverse phenomena are most likely supported by a common neural architecture in the cerebral neocortex.


Coffee break

Chair: Barry Smith


Ophelia Deroy (Institute of Philosophy, School of Advanced Study, University of London)

Multisensory binding: ‘all-or-nothing’ or grades of integration? 

This talk will examine some possible differences between cases of unisensory binding (e.g. visual binding) and multisensory binding. Whereas in unisensory cases there seems to be a switch in experience when two features are experienced as belonging to the same object or to two different objects (or locations), the distinction is much harder to draw in multisensory cases. The first part of the talk will review some principled arguments and actual cases that do not fit easily into the ‘all-or-nothing’ view of binding, and the second part will ask whether these cases invite us to revise the concept of binding, or to find another conceptual framework altogether.


Michael Morgan (Visual Perception Group, Max Planck Institute for Metabolism Research, Köln, Germany; and City University London)

Binding features and objects across space

What binds objects across space so that we can carry out simple geometry on their relations? How do we know when a line is collinear with a dot? The problem will be discussed using the apparently uncomplicated examples of vernier and spatial interval acuity, and the Fraser ‘twisted cord’. A common solution to spatial binding is to posit a dedicated ‘filter’ for each task, but this rapidly becomes implausible if we have to invent a new filter for every geometrical task. The alternative is a neural engine that can make spatial relations explicit using ‘local sign’, but this too has its difficulties. I shall also talk about the problem of binding ‘glimpses’ of objects across eye movements into a spatially coherent Gestalt.


General discussion, led by Matt Nudds (Department of Philosophy, University of Warwick)