Olivier has had an abstract accepted for the PIRE Multi-sensory integration workshop in July.
Multisensory integration in animal warning signals. Olivier Penacchio, Julie M. Harris, School of Psychology and Neuroscience, University of St Andrews
Many species in the animal kingdom use camouflage to avoid predation. By contrast, aposematic species adopt a strategy that makes them easier for would-be predators to spot: they display distinctive signals, called warning signals, such as conspicuous colour patterns, sounds, odours or distasteful secretions, to advertise that they are defended, whether by toxicity or by being otherwise unprofitable. Common visual examples of such displays are the yellow-black stripes of wasps and the red-black pattern of ladybirds. Warning signals offer an evolutionarily stable strategy: the cost of an increased likelihood of being detected by a predator is outweighed by the enhanced unlearnt and learnt avoidance that warning displays provoke.
Intriguingly, multimodal warning signals are common, with prey relying on combinations of distinctive colour patterns, sounds, odours or secretions to deter predation (Rowe & Halpin, 2013). While the efficacy of multimodal warning signals over their unimodal components has been demonstrated empirically, we lack a functional understanding of how such signals act on the brain of the receiver.
In this work, we explore computationally the potential advantages of multimodal warning signals. We define a model of multisensory integration consisting of two layers of unimodal sensory units and a layer of multisensory units that integrates the unimodal sensory inputs. The layer of multisensory units is reminiscent of the superior colliculus in mammals, or the optic tectum in birds, which are among the main predators of aposematic species. The model includes contextual modulation through excitatory and inhibitory lateral interactions between units within the same layer, similar to that defined in (Ursino, Cuppini, Magosso, Serino, & di Pellegrino, 2009). The unimodal layers are set up to emulate the visual and auditory modalities. As empirically measured in both mammals and birds, the receptive fields of auditory units are larger than those of the visual units (Kadunce, Vaughan, Wallace, & Stein, 2001; Knudsen, 1982). The model reproduces several properties of multisensory integration, such as multisensory enhancement and contrast, within- and cross-modality suppression, and the redundant signal effect.
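The core mechanism behind multisensory enhancement in models of this family can be illustrated with a minimal sketch. The code below is not the authors' model: it is a simplified, purely feedforward caricature (no lateral interactions) with assumed parameter values, in which unimodal responses are Gaussian receptive-field profiles over a 1-D array of spatial locations (auditory broader than visual, as in the abstract) and a multisensory unit sums its inputs through a sigmoid. Because the sigmoid has a threshold, a weak auditory input added to a moderate visual input yields a superadditive boost at the stimulus location.

```python
import numpy as np

def gaussian_rf(n, center, sigma):
    """Gaussian receptive-field profile over n spatial locations."""
    x = np.arange(n)
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sigmoid(u, slope=4.0, theta=0.5):
    """Static nonlinearity of the multisensory units (assumed parameters)."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

N, stim_loc = 100, 50
# Unimodal responses to a stimulus at stim_loc; auditory receptive
# fields are broader than visual ones, and the auditory signal is weak.
visual = gaussian_rf(N, stim_loc, sigma=3.0) * 0.4
auditory = gaussian_rf(N, stim_loc, sigma=10.0) * 0.15

# Multisensory units sum their unimodal inputs through the sigmoid;
# the threshold makes weak combined inputs superadditive.
r_vis_only = sigmoid(visual)
r_multi = sigmoid(visual + auditory)

enhancement = (r_multi.max() - r_vis_only.max()) / r_vis_only.max() * 100
print(f"peak response, visual only:           {r_vis_only.max():.3f}")
print(f"peak response, visual + weak auditory: {r_multi.max():.3f}")
print(f"multisensory enhancement:              {enhancement:.0f}%")
```

Because the sigmoid is steepest near its threshold, the relative gain from adding the auditory cue is largest when the visual input is weak, a signature of the inverse-effectiveness principle reported in the superior colliculus literature.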
We show that such a generic model of multisensory integration predicts increased efficacy of multimodal warning signals, a feature envisioned by biologists as a potentially important selective pressure for the evolution of warning signals (Rowe & Halpin, 2013). In particular, the model is in agreement with the efficacy back-up hypothesis: in noisy environments, a spatial location is much more salient when an auditory signal, even a weak one, supplements a visual signal than when the activity is driven by the visual signal alone.
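The back-up effect described above can be illustrated with a toy simulation. Again, this is not the authors' model but a hedged sketch with assumed parameters: on each trial, a visual signal embedded in spatial noise is either presented alone or supplemented by a weak, spatially broad auditory cue, and the stimulus is counted as localized when the most active multisensory unit lies near the true location.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_rf(n, center, sigma):
    """Gaussian receptive-field profile over n spatial locations."""
    x = np.arange(n)
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sigmoid(u, slope=4.0, theta=0.5):
    """Static nonlinearity of the multisensory units (assumed parameters)."""
    return 1.0 / (1.0 + np.exp(-slope * (u - theta)))

N, target, trials = 100, 50, 500
hits_v = hits_mv = 0
for _ in range(trials):
    # Visual signal corrupted by non-negative spatial noise ("noisy environment")
    noise = rng.normal(0.0, 0.15, N).clip(min=0.0)
    visual = gaussian_rf(N, target, sigma=3.0) * 0.35 + noise
    # Weak, spatially broad auditory cue at the same location
    auditory = gaussian_rf(N, target, sigma=10.0) * 0.12
    # Localization = most active multisensory unit falls near the target
    hits_v += abs(int(sigmoid(visual).argmax()) - target) <= 2
    hits_mv += abs(int(sigmoid(visual + auditory).argmax()) - target) <= 2

print(f"localization accuracy, visual only:           {hits_v / trials:.2f}")
print(f"localization accuracy, visual + weak auditory: {hits_mv / trials:.2f}")
```

Even though the auditory cue alone is far too weak to drive the units, it raises activity over a broad neighbourhood of the target, so the target location wins the competition against noise peaks far more often in the bimodal condition.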