Paul Linton

I am a Presidential Scholar in Society and Neuroscience (2022-25) and a Fellow of the Italian Academy (2022-24) at Columbia University, where I work on 3D vision as an affiliate of Niko Kriegeskorte's Visual Inference Lab at the Zuckerman Mind Brain Behavior Institute.

I am the author of the book The Perception and Cognition of Visual Space (Palgrave, 2017), the organiser of the Royal Society meeting New Approaches to 3D Vision, and the lead editor of the accompanying Philosophical Transactions of the Royal Society B volume.

My experimental work questions the longstanding theory that the visual system uses 'vergence' (the angular rotation of the eyes) to triangulate the size and distance of objects. My theoretical work develops new accounts of visual scale and visual shape to accommodate this fact.

Previously, I was a Research Fellow at the Centre for Applied Vision Research, City, University of London; a Law Tutor at St Hilda's College, Oxford University; a Teaching Fellow in Philosophy at University College London; and a Research Intern at Meta Reality Labs.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Twitter

Book

Linton, P. The Perception and Cognition of Visual Space (Palgrave Macmillan, 2017)

Single-author book arguing that 3D vision is a two-stage process: depth is first extracted from stereo vision (at the level of perception), and stereo vision is then integrated with other depth cues (at the level of cognition).
Please email me for an electronic copy.

Reviewed in Perception by Casper Erkelens (former Head of Physics at Utrecht) as "a valuable contribution to the scientific literature on visual perception."

Featured on the Brains Blog, the leading online forum for cognitive science.
Post One / Post Two / Post Three / Post Four / Post Five

Meeting

New Approaches to 3D Vision (Royal Society Scientific Meeting)

Royal Society meeting bringing together researchers in Computer Vision, Animal Navigation, and Human Vision to address a shared question: what is the most appropriate representation for 3D vision? Over 800 people participated, with speakers from DeepMind, Google Robotics, Meta (Facebook) Reality Labs, and Microsoft Research.

Talks and abstracts available on the Royal Society meeting website.

Edited Volume

New Approaches to 3D Vision (Philosophical Transactions of the Royal Society B)

Traditionally, it's thought that 3D vision relies on recreating an accurate 3D model of the world. The new approaches to 3D vision explored in this volume challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models, or no model at all. This theme issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles.

Download flyer for volume.

Papers

Golan et al. (2023), ‘Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses’, Behavioral and Brain Sciences

Competitively selected response to Bowers et al. (2022), ‘Deep Problems with Neural Network Models of Human Vision’ in Behavioral and Brain Sciences, explaining how AI can contribute to our understanding of human vision.

Linton, P., Morgan, M., Read, J., Vishwanath, D., Creem-Regehr, S., Domini, F. (2023), ‘New Approaches to 3D Vision’, Philosophical Transactions of the Royal Society B

Introduction to our Royal Society volume arguing that Computer Vision, Animal Navigation, and Human Vision are currently grappling with the same question: what is the most appropriate representation for 3D vision? It draws parallels between the solutions the different disciplines adopt, and poses questions for future research.

Linton, P. (2023). 'Minimal theory of 3D vision: new approach to visual scale and visual shape', Philosophical Transactions of the Royal Society B

If my experimental work argues the visual system doesn’t know the rotation of the eyes (the ‘vergence’ signal), then my theoretical work argues that the visual system doesn’t even know the two eyes are in different locations. This leads me to develop a new theory of visual scale and visual shape, which is focused on resolving this tension, rather than trying to estimate the distance and shape of objects in the world.

Linton, P. (2020). 'Does Vision Extract Absolute Distance From Vergence?', Attention, Perception, & Psychophysics, 82, 3176–3195

The closer an object is, the more the eyes have to rotate to fixate on it. This mechanism (known as 'vergence') is thought to provide critically important distance information. However, in this paper I test vergence as a distance cue divorced from other sources of distance information, and find it provides no useful distance information.

Featured on the Psychonomic Society's "All Things Cognition" podcast.
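The triangulation geometry the paper tests can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the function name and the 64 mm interpupillary distance are my own assumptions):

```python
import math

def vergence_distance(vergence_deg, ipd_m=0.064):
    """Fixation distance implied by a symmetric vergence angle.

    Geometry: the two eyes, separated by the interpupillary distance
    (IPD), rotate to fixate a point straight ahead. The vergence angle
    is the angle between the two lines of sight, so half of it subtends
    half the IPD at the fixation distance:
        d = (ipd / 2) / tan(vergence / 2)
    """
    half_angle = math.radians(vergence_deg) / 2
    return (ipd_m / 2) / math.tan(half_angle)

# The cue compresses rapidly with distance: a fourfold change in
# fixation distance corresponds to only a few degrees of rotation.
near = vergence_distance(7.33)   # roughly 0.5 m
far = vergence_distance(1.83)    # roughly 2.0 m
```

This is the geometric ideal the visual system would have to exploit; the paper's finding is that, once other distance cues are removed, observers do not in fact recover distance from this angle.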

Linton, P. (2021). 'Does Vergence Affect Perceived Size?', Vision, 5(3), 33

Vergence eye movements are also thought to play an essential role in size perception, in a process known as “size constancy”, which enables us to differentiate a small object up close from a large object far away, even though both cast the same retinal image. I develop a new paradigm to demonstrate that this effect does not exist once cognitive influences (subjective knowledge about changes in distance) have been controlled for.

Invited for a special issue on Size Constancy in Perception and Action edited by Mel Goodale FRS and Robert Whitwell.
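The "same retinal image" point can be made concrete with a little trigonometry (my own illustrative sketch; the function name and example sizes are assumptions, not from the paper):

```python
import math

def retinal_angle_deg(size_m, distance_m):
    """Visual angle subtended by an object of the given physical size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 10 cm object at 1 m and a 1 m object at 10 m subtend the same
# visual angle, so the retinal image alone cannot distinguish them;
# size constancy requires some additional source of distance information.
near = retinal_angle_deg(0.10, 1.0)
far = retinal_angle_deg(1.00, 10.0)
```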

Linton, P. (2021). ‘V1 as an Egocentric Cognitive Map’, Neuroscience of Consciousness

In Linton (2020) and Linton (2021) I find that vergence has either a negligible or no effect on size and distance perception. The question is how to reconcile this finding with the processing of the vergence signal in the primary visual cortex (V1). I argue that we need to distinguish between perceptual and cognitive processing in V1, and draw an analogy with findings on non-visual processing in mouse V1.

Special issue on Consciousness Science and Its Theories. Contributors include Stanislas Dehaene, Catherine Tallon-Baudry, Biyu Jade He, and Axel Cleeremans.

Linton, P., (2021). ‘Conflicting shape percepts explained by perception cognition distinction’, PNAS, 118 (10) e2024195118

Debate in PNAS with Jorge Morales, Axel Bax, and Chaz Firestone on 3D shape processing in response to Morales et al. (2020). ‘Sustained representation of perspectival shape’, PNAS, 117(26), 14873-82.

Provided inspiration for new experiments in Morales et al. (2021). ‘Reply to Linton: Perspectival interference up close’, PNAS, 118 (28) e2025440118.

Virtual Reality

PhD Internship on the DeepFocus Team at Facebook Reality Labs
Manager: Marina Zannoli, Collaborator: Lei Xiao, Team Lead: Douglas Lanman

Collaborated closely with researchers in deep learning, computer graphics, and optics, as part of a small interdisciplinary team, using artificial intelligence for real-time gaze-contingent defocus blur rendering.

Used principles of vision science to inform the development of the neural network.

Ran user studies to evaluate the neural network and make actionable recommendations.

Linton, P. (2019). ‘Would Gaze-Contingent Rendering Improve Depth Perception in Virtual and Augmented Reality?’, ArXiv, 1905.10366

In contemporary virtual reality displays, the scene is static despite eye movements by the observer. This can cause a problem because the centre of projection of the eye (the nodal point) and the centre of rotation of the eye are offset relative to one another.

I was the first to propose updating the camera frustum in virtual reality with eye movements to control for these distortions.
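The offset can be illustrated with a minimal sketch (my own illustration, not code from the paper; the 6 mm offset is an assumed round number, and the real value depends on the eye model used):

```python
import math

# When the eye rotates about its centre of rotation, the nodal point
# (the eye's centre of projection) swings with it, because the two
# centres are offset along the line of sight.
NODAL_OFFSET_M = 0.006  # assumed rotation-centre-to-nodal-point offset

def nodal_point(gaze_deg):
    """Nodal-point position (x, z) after a horizontal gaze rotation,
    with the centre of rotation at the origin and z the straight-ahead axis."""
    phi = math.radians(gaze_deg)
    return (NODAL_OFFSET_M * math.sin(phi), NODAL_OFFSET_M * math.cos(phi))

# A display that renders from a fixed camera position ignores this
# translation; gaze-contingent rendering would instead re-centre the
# camera frustum on the current nodal point each frame.
x, z = nodal_point(20.0)  # 20 degrees of gaze eccentricity
```

A millimetre-scale camera translation is small, but for near objects in a head-mounted display it produces a measurable shift of the rendered image relative to the world.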

Recorded Talks


Public Events



Thanks to Jon Barron for the html template