Current Projects

Counter-shaded animal patterns: from photons to form

Julie Harris, P. George Lovell, Olivier Penacchio, Innes Cuthill, Graeme Ruxton (funded by BBSRC)

Many species are counter-shaded: the dorsal surface is darker than the ventral surface. It has been proposed that counter-shading offers the animal camouflage, and there are two potential accounts of its evolution. (i) Counter-shading enables the animal to match its background: viewed from above, the darker dorsal surface matches the ground; viewed from below, the lighter ventral surface matches the sky. (ii) Counter-shading is self-shadow concealment. The image of a uniformly coloured 3D object exhibits a shading pattern determined by its shape and the light-source direction, and shape-from-shading processes in the brain allow humans to perceive 3D shape even though the retinal image is 2D. A counter-shaded animal disrupts the shading pattern produced by this light-shape interaction; in the extreme, the counter-shading could cancel the shading entirely, impeding the detection of 3D shape and reducing visibility. Using calibrated cameras we will quantify counter-shading in animals and develop mathematical models to test whether the observed patterns match those expected under the hypotheses above. In laboratory experiments, we will test how well optimal counter-shading patterns fool the human visual system, to probe the details of counter-shading processing. We will also test with bird visual systems, to examine how general the success of counter-shading strategies is. Finally, field studies will provide the ultimate test: whether the optimal shading patterns from our simulations do improve concealment from birds in natural lighting environments.
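
The self-shadow-concealment logic can be made concrete with a toy model. The sketch below is an illustration only, not the project's actual model: the horizontal-cylinder body, overhead light source, Lambertian surface, and all names and parameters are simplifying assumptions.

    import numpy as np

    # Body idealised as a horizontal cylinder; phi is the angle around the
    # cross-section, from the back (phi = 0) to the belly (phi = pi).
    def lambertian_shading(phi, sun_elev=np.pi / 2):
        """Image intensity of a uniformly coloured cylinder (Lambert's cosine law)."""
        normal = np.stack([np.sin(phi), np.cos(phi)])           # (horizontal, vertical)
        light = np.array([np.cos(sun_elev), np.sin(sun_elev)])  # light from elevation sun_elev
        return np.clip(light @ normal, 0.0, None)               # clamp shadowed points to 0

    phi = np.linspace(0.0, np.pi, 181)
    shading = lambertian_shading(phi)    # bright back, dark belly under overhead light

    # The counter-shading that cancels self-shadowing is the reciprocal of the
    # shading: pigmentation darkest on the back, lightest on the belly.
    eps = 1e-3                           # avoid division by zero in full shadow
    reflectance = 1.0 / np.maximum(shading, eps)
    reflectance /= reflectance.max()     # normalise so reflectance lies in (0, 1]

    radiance = reflectance * shading     # ~constant wherever the surface is lit

In this toy model, radiance is flat over the directly lit surface, so the shading gradient that shape-from-shading would exploit has been removed; the same construction generalises to other body shapes and light directions.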

Colour and luminance gradients in depth and shape perception

Julie Harris, Marina Bloj, P. George Lovell, Stéphane Clery (funded by EPSRC)

Can you tell the difference between real filmed footage of an event and a computer-rendered counterpart? Despite tremendous progress in animation and graphics, the answer is most likely yes. We still have a long way to go in generating high-quality, realistic rendered worlds, which have a wide variety of applications, from gaming, through medical and industrial simulators, to architect-designed walk-throughs that give us a feel for how a new building could look. Improving the naturalness and realism of such virtual environments is a key challenge for those involved in computer graphics and rendering, particularly when there is a demand for interactive, real-time applications: we want to walk around in that simulated new building, not just view static photograph-like scenes. One reason progress is slow is that the extraordinary visual capabilities of most humans, though apparently effortless, hide a complex web of visual processing that is not yet fully understood. If we do not yet understand what enhances realism for the human visual system, it is not surprising that progress is slow in developing technology to improve the realism of simulations. The aim of this work is to elucidate some of the basic perceptual processes that underlie how subtle changes in colour and lightness enhance the realism of our perception of a three-dimensional scene. This human behavioural research underpins the development of graphics and rendering technologies that will deliver enhanced realism for virtual environments. This is a collaborative project with Marina Bloj's lab at the University of Bradford.

Recent Projects

Monocular zones in 3-D scenes

Julie Harris, Danielle Smith, Manuel Spitschan, Katharina Zeiner (PhD funded by EPSRC Doctoral Training Grant)

When we look at the world with two eyes, there are regions, near the edges of objects, that only one eye can see. We are exploring how the visual system perceives these regions, what input they have to our representation of the world, and how depth can be perceived in or near those zones.

Binocular distance perception

Julie Harris, Vit Drga, Katharina Zeiner (funded by EPSRC)

We are exploring how the small differences between the two eyes' views of the world are used in the perception of distance in depth.
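
For orientation, the geometry involved follows a standard textbook relation (not a result of this project): for interocular separation I and fixation distance D, a point at a small depth offset Δd beyond fixation yields a relative horizontal disparity of approximately

    δ ≈ I · Δd / D²   (in radians),

so the same disparity signals different depth intervals at different viewing distances; recovering metric distance in depth therefore requires combining disparity with an estimate of D itself.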

Binocular motion in depth

Julie Harris, Cat Grafton, Harold Nefs (funded by EPSRC)

In this project we study which sources of visual information are used in the perception of motion in depth, and in related judgements such as the time to contact with moving objects. We compare different sources of binocular and monocular information and use ideal observer models to test how informative the different sources of information are.
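
The competing cues can be written compactly in their standard textbook forms (given here for orientation; the project's own models may differ in detail). For an object approaching along the midline at speed v, the binocular cue is the rate of change of disparity,

    dδ/dt ≈ I · v / D²,

equivalently a difference between the velocities of the object's images in the two eyes, while the classic monocular looming cue specifies time to contact directly,

    τ ≈ θ / (dθ/dt),

where θ is the object's angular size. An ideal observer attaches measurement noise to each cue and predicts the best performance that cue could support, giving a benchmark against which human judgements can be compared.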

Interaction of motion and eye movements during 3-D motion

Julie Harris, Harold Nefs (funded by University of St Andrews)

When the eyes move to follow an object, the brain must take account of both retinal information and the eye movements themselves. We are exploring how it does this when objects move in 3-D.
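
One standard way to frame the computation (a textbook account, not necessarily the model tested here) is as a summation of retinal and extra-retinal signals:

    object velocity in the world ≈ retinal slip + estimated eye velocity,

where the eye-velocity estimate comes from an extra-retinal signal such as an efference copy of the pursuit command. Systematic errors in that estimate predict systematic misperceptions of 3-D trajectories.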