Centre for Applied Vision Research


I am a Vision Scientist working on 3D Vision and the author of the book The Perception and Cognition of Visual Space (Palgrave, 2017). I recently completed my PhD (subject to minor corrections) at the Centre for Applied Vision Research, City, University of London, supervised by Christopher Tyler and Simon Grant. Previously, I was a Research Intern at Facebook Reality Labs, a Law Tutor at St Hilda’s College, Oxford University, and a Teaching Fellow in Philosophy at University College London.



In my book I use the distinction between perception and cognition to reject Bayesian Cue Integration, and advance a novel account of visual shape and visual scale.

Introduction (Preface & Chapter 1)

I outline the distinction between perception and cognition, and give an overview of how cognitive influences were first excluded from, and later reincorporated into, 3D vision.

Visual Shape (Chapter 2 & Chapter 3)

Traditional theories argue that 3D vision integrates binocular disparity, perspective, and shading into a single coherent percept. I argue instead that visual shape is a two-stage process, with perspective and shading introduced only at the level of cognition.

Visual Scale (Chapter 4)

I argue that visual cues to scale are ineffective (vergence, accommodation, motion parallax, vertical disparities), limited in application (ground plane), or merely cognitive (familiar size), leading me to suggest that visual scale is purely cognitive.

The Preface, Chapter 1, and Chapter 2 are available above. Please email me for Chapters 3 and 4. The book is also available on a third-party site, but I take no responsibility for that site.

Reviewed in Perception

Reviewed in Perception as “a valuable contribution to the scientific literature on visual perception” by Prof. Casper Erkelens (Emeritus Head of Physics at Utrecht University)


My experimental work demonstrates that the visual system cannot extract size and distance from eye movements.

British Machine Vision Association

My talk at the British Machine Vision Association meeting ‘3D Worlds from 2D Images in Humans and Machines’ summarises my experimental work.

Distance Perception

In Linton, P. (2020). ‘Does Vision Extract Absolute Distance from Vergence?', Attention, Perception, & Psychophysics, 82, 3176-3195, I show that eye movements do not provide absolute distance information.
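To illustrate what is at stake in this claim: geometrically, vergence could in principle specify absolute distance, since for an observer with interpupillary distance I fixating a target at distance d, the vergence angle θ satisfies tan(θ/2) = I/(2d), which is invertible. The sketch below (a hypothetical illustration of this textbook geometry, not code from the paper; the 0.063 m interpupillary distance is an assumed average value) shows the relation my experiments suggest the visual system does not in fact exploit.

```python
import math

def vergence_angle(distance_m, ipd_m=0.063):
    """Vergence angle (radians) geometrically specified by fixating
    a target at distance_m, given interpupillary distance ipd_m."""
    return 2 * math.atan(ipd_m / (2 * distance_m))

def distance_from_vergence(angle_rad, ipd_m=0.063):
    """Absolute distance (metres) that a vergence angle would specify
    if the visual system inverted the fixation geometry."""
    return ipd_m / (2 * math.tan(angle_rad / 2))

# The geometry is invertible: a target at 0.5 m fixed-point recovers.
theta = vergence_angle(0.5)
print(round(distance_from_vergence(theta), 3))  # → 0.5
```

The point of the paper is that, although this inversion is mathematically trivial, observers cannot actually use vergence this way once confounding cues are controlled.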

Size Perception

In Linton, P. (2020). ‘Eye Movements Do Not Affect Perceived Size’, bioRxiv (under review), I show that eye movements are not used for size constancy (perceiving objects as having a constant physical size despite changes in distance). This work is explained in my Vision Sciences Society poster and accompanying presentation.

Do We See Scale?

In Linton, P. (2018). ‘Do We See Scale?', bioRxiv (under revision), I argue that if all of our visual cues to scale are ineffective (vergence, accommodation, motion parallax, vertical disparities), limited in application (ground plane), or merely cognitive (familiar size), we should be open to the idea that the visual system does not extract absolute size and distance from the environment, and that visual scale is entirely cognitive.


Vision science enables us to understand what information needs to be presented to the visual system to simulate reality.

Facebook Reality Labs

I was a Research Intern on DeepFocus, investigating the optical cues necessary for immersive virtual reality, working with Lei Xiao and Marina Zannoli as part of the Display Systems Research team led by Douglas Lanman.

Gaze-Contingent Rendering

In Linton, P. (2019). ‘Would Gaze-Contingent Rendering Improve Depth Perception in Virtual and Augmented Reality?', arXiv, I was the first to explore how gaze-contingent rendering of binocular disparities could improve depth perception in virtual and augmented reality. Gordon Wetzstein’s group has since published experimental work on this question.