Research Interests

The Culham Lab is interested in how human observers use vision both to perceive the world and to guide their actions.

A major theme of the lab is Immersive Neuroscience: bringing cognitive neuroscience research closer to the real world. Our lab has developed unique techniques for bringing the "real world" into the constrained environment of the fMRI scanner, enabling us to study actions (grasping, reaching, and tool use) and visual perception using real actions and real objects.

We use several techniques from cognitive neuroscience to understand human vision. Functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS) are safe ways to study which regions of the brain are active when people view different visual stimuli or use visual information to perform different tasks. Behavioral testing tells us about the visual experiences and limits of normal research participants. Neuropsychological testing tells us how these abilities are compromised in patients with damage to different areas of the brain.

Immersive neuroscience uses immersive stimuli like 3D objects, immersive modalities like VR/AR, and immersive tasks like video-game play to provide high ecological validity with more experimental control and convenience than the real world offers. (Adapted from Snow & Culham, 2021, Trends in Cognitive Sciences.)


Word cloud from our three most recent grants (CIHR, NSERC, NFRF), generated using wordclouds.com.

We are now using immersive simulations to understand human vision in naturalistic scenarios that provide greater experimental control and convenience than true reality. Our current work focuses on several approaches:

  • Real (tangible) stimuli

  • Realistic 3D vision (binocular disparity, motion parallax, real-world viewing geometry)

  • Virtual/augmented reality

  • Video games

Word cloud from Jody Culham's Google Scholar profile (2024), generated using scholargoogler.com.

Recent and current research questions in our lab include the following:

  • How does the processing of real-world stimuli and actions differ from that of commonly used proxies such as pictures and pantomimed, imagined, or observed actions?

  • How do 3D cues like stereopsis and motion parallax affect processing of natural visual stimuli like faces, objects, and scenes?

  • Which aspects of reality and virtual/augmented reality are critical for naturalistic behavior and brain responses? Can the added engagement in VR and gaming environments tap into a diverse, robust range of cognitive functions?

  • What can we learn about human vision from studying stimuli and actions that are more realistic and complex than those typically used? For example, how does the brain represent real-world size and distance?

  • Using functional near-infrared spectroscopy (fNIRS), what can we learn about the neural basis of key actions in the human repertoire (e.g., grasping and manipulation, feeding, tool use, defensive actions, locomotion)?

  • Which brain areas are involved in various aspects of grasping and reaching tasks? How do these areas in the human brain relate to known areas in other primate species? What types of visual and motor information do these areas encode? Can signals from these areas be used to develop brain-computer interfaces?

  • How do vision-for-perception and vision-for-action rely on different brain areas? How are these processes affected when the nature of the task is changed, for example by asking participants to perform pretend (pantomimed) or delayed actions?

  • How are behavior, brain activation, and brain connectivity affected by focal brain lesions in interesting single-case studies, including patients who cannot recognize objects but can act toward them?