

Evaluating Depth Perception in Virtual and Physical Environments
As part of my PhD research at York University, I conducted a series of studies comparing depth perception in virtual and natural environments, revealing critical limitations in VR systems and contributing to the scientific understanding of 3D perception.
Project Overview
Client: York University (PhD dissertation)
My Role: Doctoral Student (Behavioural, Quantitative Perceptual Studies)
Timeline: September 2016 to August 2022
Skills
- Experimental design & psychophysics
- Quantitative & qualitative data analysis
- Statistical modeling (regression, mixed models, Bayesian analysis, bootstrapping)
- 3D perception & human factors research
- User testing in VR and physical environments
- Cross-modal cue integration studies
- Research communication (academic publications & presentations)
Tools
- Unity (VR experiment development)
- Python (stimulus programming, data collection)
- R (advanced statistical analysis, psychophysical modeling)
- Blender (3D object modeling for virtual stimuli and physical prototypes)
- 3D printing (fabricating perceptually matched physical models)
- MATLAB (signal processing & Psychtoolbox)
Problem
Depth perception is central to how we interact with objects in the world, yet VR systems often distort perceived depth because the visual cues they present conflict with one another (most notably vergence and accommodation). At the time of this research, little was known about how these distortions compared to depth perception in real environments offering comparable depth cues, or whether interacting with virtual objects could help overcome these limitations.
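To make the conflict concrete: in a head-mounted display the eyes must converge on the simulated depth of an object, while the focus (accommodation) demand stays fixed at the focal plane of the optics. The short sketch below illustrates this mismatch numerically; the interpupillary distance and focal distance are assumed, illustrative values, not parameters of the displays used in the studies.

```python
# Minimal, hypothetical illustration of the vergence-accommodation conflict
# in an HMD (values are assumed; not taken from the dissertation).
import math

IPD_CM = 6.3            # assumed interpupillary distance
FOCAL_DIST_CM = 150.0   # assumed fixed focal distance of the HMD optics

def vergence_deg(dist_cm, ipd_cm=IPD_CM):
    """Vergence angle (degrees) needed to fixate a point at dist_cm."""
    return math.degrees(2.0 * math.atan(ipd_cm / (2.0 * dist_cm)))

def accommodation_diopters(dist_cm):
    """Accommodative demand (diopters) for a focal distance in cm."""
    return 100.0 / dist_cm

for d in (50, 100, 150, 300):
    # Vergence tracks the simulated depth of the virtual object,
    # while accommodation stays locked to the display's focal plane.
    print(f"object at {d:>3} cm: vergence {vergence_deg(d):.2f} deg, "
          f"accommodation {accommodation_diopters(FOCAL_DIST_CM):.2f} D "
          f"(natural viewing would demand {accommodation_diopters(d):.2f} D)")
```

In natural viewing the two signals agree at every distance; in the HMD they agree only at the focal plane, which is the mismatch implicated in the depth errors reported below.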
Methods
- Designed and developed three psychophysical studies using Unity and Python, run across multiple display systems (head-mounted displays and stereoscopes).
- Created realistic virtual objects in Blender and fabricated perceptually matched 3D-printed models for use in physical test environments (Figure 1).
- Applied advanced data analyses in R, including psychophysical modeling, mixed models, logistic regression, Bayesian statistics, and bootstrapping; an illustrative sketch follows this list.
- Participants estimated distances and interacted with both virtual and physical objects under varying cue conditions (binocular, motion, interaction).
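As a concrete, purely illustrative example of the analysis style listed above: the sketch below fits a logistic psychometric function to hypothetical distance-judgment data and bootstraps a confidence interval for the point of subjective equality (PSE). The dissertation analyses were carried out in R; this Python version, with invented data and parameter values, just shows the shape of the approach.

```python
# Illustrative only: hypothetical data, logistic psychometric fit,
# and a percentile bootstrap on the PSE.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def psychometric(x, pse, slope):
    """Logistic function: P("farther") as a function of probe distance x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Hypothetical data: probe distances (cm) and proportion "farther" responses.
distances = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
n_trials = 40
p_farther = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.97])
k_farther = np.round(p_farther * n_trials).astype(int)

# Fit the psychometric function; the PSE estimates perceived distance.
popt, _ = curve_fit(psychometric, distances, k_farther / n_trials,
                    p0=[55.0, 0.2])
print(f"PSE = {popt[0]:.1f} cm, slope = {popt[1]:.3f}")

# Nonparametric bootstrap: resample binomial responses at each distance,
# refit, and take percentiles of the PSE for a 95% CI.
pses = []
for _ in range(2000):
    resampled = rng.binomial(n_trials, k_farther / n_trials) / n_trials
    try:
        p, _ = curve_fit(psychometric, distances, resampled, p0=popt)
        pses.append(p[0])
    except RuntimeError:
        continue  # skip rare non-converging resamples
lo, hi = np.percentile(pses, [2.5, 97.5])
print(f"95% bootstrap CI for PSE: [{lo:.1f}, {hi:.1f}] cm")
```

Comparing the fitted PSE against the true object distance is one standard way to quantify the under- or overestimation reported in the findings below.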

Figure 1: Sample of a 3D-printed physical object
Key Findings
- Depth distortions in VR: Participants consistently underestimated the distance and depth of virtual objects, while physical objects were judged accurately.
- Cause of errors: The vergence-accommodation conflict in VR displays contributed significantly to depth underestimation.
- Cue weighting: Participants relied more on binocular cues than on motion cues when judging depth (a standard weighting model is sketched after this list).
- Interaction effects: Reaching toward virtual objects did not improve performance on depth-based tasks.
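For context on the cue-weighting result: cue integration is commonly modeled as reliability-weighted averaging, with each cue weighted in proportion to its inverse variance. The sketch below shows that standard textbook model with invented numbers; it is not the dissertation's specific model, and the estimates and noise levels are hypothetical.

```python
# Standard inverse-variance cue-combination model (illustrative numbers only).
import numpy as np

def combine_cues(estimates, sigmas):
    """Reliability-weighted average: w_i proportional to 1 / sigma_i**2."""
    weights = 1.0 / np.square(sigmas)
    weights /= weights.sum()
    return np.dot(weights, estimates), weights

# Hypothetical per-cue depth estimates (cm) and noise levels:
# a reliable binocular cue and a noisier motion cue.
depth_hat, w = combine_cues(estimates=np.array([48.0, 55.0]),
                            sigmas=np.array([2.0, 6.0]))
print(f"combined depth = {depth_hat:.1f} cm, weights = {np.round(w, 2)}")
```

With a low-noise binocular estimate and a noisier motion estimate, the combined percept sits close to the binocular value, mirroring the heavier binocular weighting observed in the studies.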
Impact
- Contributed new evidence to the scientific literature on depth perception, emphasizing the importance of cue context, user experience, and ecological validity.
- Highlighted critical limitations of VR systems that inform the design of head-mounted displays and training environments.
- Advanced research methodology by integrating naturalistic 3D objects and scenes, expanding beyond typical screen-based perceptual experiments.
- Publications:
What I learned
- Gained extensive experience in conceptualizing, programming, and executing long-term experimental studies.
- Strengthened technical expertise in Unity, Python, and R for perceptual research.
- Developed skills in 3D modeling with Blender to build realistic virtual stimuli and physical prototypes.
- Learned to approach ambiguous research questions with creative, rigorous, and innovative problem solving.