
Evaluating Depth Perception in Virtual and Physical Environments

As part of my PhD research at York University, I conducted a series of studies comparing depth perception in virtual and natural environments, revealing critical limitations in VR systems and contributing to the scientific understanding of 3D perception.

Project Overview

Client: York University (PhD dissertation)

My Role: Doctoral Student (behavioural and quantitative perceptual studies)

Timeline: September 2016 to August 2022


Skills

  • Experimental design & psychophysics

  • Quantitative & qualitative data analysis

  • Statistical modeling (regression, mixed models, Bayesian analysis, bootstrapping)

  • 3D perception & human factors research

  • User testing in VR and physical environments

  • Cross-modal cue integration studies

  • Research communication (academic publications & presentations)


Tools

  • Unity (VR experiment development)

  • Python (stimulus programming, data collection)

  • R (advanced statistical analysis, psychophysical modeling)

  • Blender (3D object modeling for virtual stimuli and physical prototypes)

  • 3D Printing (fabricating perceptually matched physical models)

  • MATLAB (signal processing & Psychtoolbox)

Problem

Depth perception is central to how we interact with objects in the world, yet VR systems often produce distortions due to conflicts among visual cues that are congruent in natural viewing. At the time of this research, little was known about how these distortions compared to depth perception in real environments offering comparable depth cues, or whether interacting with virtual objects could help overcome these limitations.
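
One well-known source of such conflict, echoed in the findings below, is the vergence-accommodation mismatch of head-mounted displays: vergence demand follows a target's rendered distance, while accommodation stays fixed at the display's optical focal distance. The following is a minimal Python sketch of that geometry; the 63 mm interpupillary distance and 2 m focal distance are illustrative assumptions, not values from the study.

    import math

    # Illustrative values only (assumptions, not from the study): a typical
    # interpupillary distance and a fixed HMD optical focal distance.
    IPD_M = 0.063            # interpupillary distance (m)
    HMD_FOCAL_DIST_M = 2.0   # optical focal distance of the headset (m)

    def vergence_angle_deg(distance_m, ipd_m=IPD_M):
        """Vergence angle (deg) needed to fixate a target at distance_m."""
        return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

    # A virtual object rendered at 0.5 m demands ~7.2 deg of vergence, while
    # accommodation stays at the 2 m screen (a ~1.8 deg vergence equivalent):
    # the two cues disagree, and depth judgements suffer.
    for d in (0.5, 1.0, HMD_FOCAL_DIST_M):
        print(f"target at {d:.1f} m -> vergence {vergence_angle_deg(d):.2f} deg")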

Methods

  • Designed and developed three psychophysical studies using Unity and Python across multiple display systems (head-mounted displays and stereoscopes).

  • Created realistic virtual objects in Blender and fabricated perceptually matched 3D-printed models for use in physical test environments.

  • Applied advanced data analyses, including psychophysical modeling, mixed models, logistic regression, Bayesian statistics, and bootstrapping in R (a simplified sketch of this kind of fit appears after this list).

  • Participants estimated distances and interacted with both virtual and physical objects under varying cue conditions (binocular, motion, interaction).
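
To give a flavour of the analysis, here is a minimal Python sketch of a psychometric-function fit with a bootstrapped 95% confidence interval on the point of subjective equality (PSE). The dissertation's analyses were run in R; this Python version, its logistic least-squares fit, and all of the trial numbers in it are illustrative stand-ins rather than the actual pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    # Hypothetical trial data (stand-ins, not study data): comparison depths
    # (cm) and the proportion of "comparison looked deeper" responses.
    depths = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    n_trials = 40
    p_deeper = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

    def logistic(x, pse, slope):
        """Cumulative logistic psychometric function."""
        return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

    def fit_pse(props):
        """Fit the psychometric function; return the PSE estimate."""
        (pse, _slope), _ = curve_fit(logistic, depths, props, p0=[5.0, 1.0])
        return pse

    pse_hat = fit_pse(p_deeper)

    # Parametric bootstrap: resample binomial responses at each depth level,
    # refit, and take percentiles of the PSE distribution as a 95% CI.
    boot = []
    for _ in range(2000):
        resampled = rng.binomial(n_trials, p_deeper) / n_trials
        try:
            boot.append(fit_pse(resampled))
        except RuntimeError:
            continue  # skip the rare non-converging refit
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"PSE = {pse_hat:.2f} cm, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")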

Figure 1: Sample of physical object

Key Findings

  • Depth distortions in VR: Participants consistently underestimated the distance and depth of virtual objects, while physical objects were judged accurately.

  • Cause of errors: Vergence-accommodation conflict in VR displays contributed significantly to depth underestimation.

  • Cue weighting: Participants relied more on binocular cues than on motion cues when judging depth (see the cue-integration sketch after this list).

  • Interaction effects: Reaching toward virtual objects did not improve performance on depth-based tasks.
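
The cue-weighting result is commonly interpreted through the standard maximum-likelihood cue-integration model, in which each cue is weighted by its reliability (inverse variance). The sketch below illustrates that textbook model in Python; it is not a claim about the exact analysis used in these studies, and the estimates and variances are invented for illustration.

    # Inverse-variance (maximum-likelihood) cue combination:
    #   w_i = (1/var_i) / sum_j(1/var_j)
    # Hypothetical single-cue depth estimates (cm) and noise variances.
    binocular_est, binocular_var = 4.2, 0.4   # reliable cue -> large weight
    motion_est, motion_var = 5.0, 1.6         # noisier cue  -> small weight

    w_bin = (1 / binocular_var) / (1 / binocular_var + 1 / motion_var)
    w_mot = 1.0 - w_bin

    combined_est = w_bin * binocular_est + w_mot * motion_est
    combined_var = 1.0 / (1 / binocular_var + 1 / motion_var)

    print(f"weights: binocular {w_bin:.2f}, motion {w_mot:.2f}")  # 0.80 / 0.20
    print(f"combined: {combined_est:.2f} cm (var {combined_var:.2f})")

A heavier binocular weight like this predicts exactly the observed pattern: judgements track the binocular signal even when motion information is available.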

Impact: What I learned

  • Gained extensive experience in conceptualizing, programming, and executing long-term experimental studies.

  • Strengthened technical expertise in Unity, Python, and R for perceptual research.

  • Developed skills in 3D modeling with Blender to build realistic virtual stimuli and physical prototypes.

  • Learned to approach ambiguous research questions with creative, rigorous, and innovative problem solving.

Contact Information

Department of Psychology and Centre for Vision Research

York University

4700 Keele Street
Toronto, ON M3J 1P3

  • LinkedIn

