
Colour and luminance gradients in depth and shape perception

Julie Harris, Marina Bloj, P. George Lovell, Stéphane Clery (funded by EPSRC)

Can you tell the difference between real filmed footage of an event and a computer-rendered counterpart? Despite tremendous progress in animation and graphics, the answer is most likely yes. We still have a long way to go in generating high-quality, realistic rendered worlds, which have a wide variety of applications, from gaming, through medical and industrial simulators, to architect-designed walk-throughs that give us a feel for how a new building could look. Improving the naturalness and realism of such virtual environments is a key challenge for those involved in computer graphics and rendering, particularly when there is demand for interactive, real-time applications: we want to walk around in that simulated new building, not just view static, photograph-like scenes. One reason our progress is slow is that the extraordinary visual capabilities of most humans, though apparently effortless, hide a complex web of visual processing that is not yet fully understood. If we do not yet understand what enhances realism for the human visual system, it is not surprising that progress is slow in developing technology to improve the realism of simulations. The aim of this work is to elucidate some of the basic perceptual processes that underlie how subtle changes in colour and lightness enhance the realism of our perception of a three-dimensional scene. This human behavioural research underpins the development of graphics and rendering technologies that will deliver enhanced realism for virtual environments.

This is a collaborative project with Marina Bloj’s lab at the University of Bradford.