What is it about?

Many animals have two eyes to see the world in depth. Because the eyes are set apart - typically with the nose in between - they capture two slightly different images of the same scene. Vision is known to use the small positional differences between these images, called disparity, to work out where visual objects are, how far away they lie, and their three-dimensional structure. It is this geometric principle that makes two flat photographs, taken by two adjacent cameras, appear three-dimensional in a Victorian stereoscope, evoking stereoscopic perception of the combined scene. However, how the eyes and brain achieve this feat has not been understood - how does the visual system sample stereoscopic information from the world dynamically? We have now studied this open question using the fruit fly visual system as a general model system. The fruit fly's small compound eyes and brain are far more accessible to study than our large single-lens eyes and convoluted brains. Yet because both species face the same three-dimensional world, and their nervous systems have evolved to perceive and act in it efficiently, they likely use similar neural coding principles to see in stereo. For example, both systems are retinotopically organised, meaning that the positions of many eye and brain neurons correspond to the x- and y-coordinates of the surrounding world. But what about depth, the z-coordinate?


Why is it important?

Our work, published in PNAS, highlights that eyes (unlike conventional cameras) register relative light changes and that this process is active. In the fruit fly, individual photoreceptor cells - the light sensors corresponding to individual "pixels" of the scene - react photomechanically to light changes by generating an ultrafast counter-motion, a photoreceptor microsaccade. Each photoreceptor moves in a specific direction at its particular location inside the compound eye, transiently readjusting its own light input. Remarkably, this sophisticated structure-function organisation signals the missing z-coordinate to the fruit fly brain. The photoreceptor microsaccades are mirror-symmetric in the left and right eyes, meaning that the same light change makes them move simultaneously in opposite directions. Therefore, during binocular viewing, the pixels in one eye move transiently with the world and those in the other eye against it. These opposing microsaccades cause small timing differences in the electrical signals of the two eyes and their downstream brain networks, rapidly and accurately informing the fly about the 3D structure of the world. What is more, behavioural tests and controls showed this coding strategy to be so efficient that flies achieve super-resolution stereo vision. In other words, sampling the world with moving photoreceptors lets flies see its details better than their compound eyes' optical resolution would suggest! We verified these coding principles by combining elaborate high-speed eye and brain activity imaging and behavioural experiments with model simulations. Through these closely interlinked approaches, the study reveals that the fly nervous system uses time differences between left- and right-eye neural signals to represent differences in object depth (the z-coordinate).
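To make the geometry concrete, here is a minimal toy sketch in Python (our illustration, not the paper's model): two mirror-symmetrically sweeping receptive fields cross a nearby object's left- and right-eye images at slightly different moments, and the combined crossing time recovers the binocular disparity, which grows as the object gets closer. All numbers (eye separation, sweep speed) are arbitrary illustrative values.

```python
import math

def crossing_times(X, Z, baseline=0.004, v=math.radians(100)):
    """Toy geometry (illustrative values, not the paper's parameters).

    An object at horizontal position X and distance Z (metres) projects to
    slightly different azimuths in the left and right eyes; that angular
    difference is the binocular disparity.  Mirror-symmetric 'microsaccades'
    sweep each eye's receptive field away from the midline at angular speed
    v (rad/s), so each eye crosses its image of the object at a different
    moment."""
    a_L = math.atan2(X + baseline / 2, Z)  # image azimuth in the left eye
    a_R = math.atan2(X - baseline / 2, Z)  # image azimuth in the right eye
    t_L = a_L / v    # left field sweeps towards positive azimuth
    t_R = -a_R / v   # right field sweeps mirror-symmetrically (negative azimuth)
    return t_L, t_R

# The summed crossing time scales with disparity, so it shrinks with distance:
for Z in (0.01, 0.02, 0.05):
    t_L, t_R = crossing_times(X=0.002, Z=Z)
    disparity_deg = math.degrees((t_L + t_R) * math.radians(100))
    print(f"Z = {Z * 100:4.1f} cm -> t_L + t_R = {(t_L + t_R) * 1e3:6.1f} ms, "
          f"disparity ~ {disparity_deg:5.2f} deg")
```

In this idealised picture the difference t_L - t_R tracks the object's horizontal position while the sum t_L + t_R tracks its disparity, and hence its depth; the real fly, of course, must read such timing differences out of noisy photoreceptor and interneuron signals rather than from clean geometry.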


Similar neural principles - so-called phase coding - likely underpin stereo vision in other animals, including humans. Finally, we propose how these principles, based on mirror-symmetrically moving sensors that sample three-dimensional information from the world, could be applied to improve robotics and machine vision.

Mikko Juusola
University of Sheffield

Read the Original

This page is a summary of: Binocular mirror–symmetric microsaccadic sampling enables Drosophila hyperacute 3D vision, Proceedings of the National Academy of Sciences, March 2022, DOI: 10.1073/pnas.2109717119.