Research Interests

The Culham Lab is interested in how human observers use vision both to perceive the world and to guide their actions. We use several techniques from cognitive neuroscience to understand human vision. Functional magnetic resonance imaging (fMRI) is a safe way to study which regions of the brain are active when people view different visual stimuli or use visual information to perform different tasks. Behavioral testing tells us about the visual experiences and limits of healthy subjects. Neuropsychological testing tells us how these abilities are compromised in patients with damage to different areas of the brain.

Our lab has developed unique techniques for bringing the "real world" into the constrained environment of the fMRI scanner. We are particularly interested in tasks such as grasping, reaching, tool use, and 3D object recognition. In our fMRI studies of reaching and grasping, rather than presenting unrealistic, ungraspable objects on a flat screen or having subjects pretend to act, we have subjects perform real actions on real objects. Moreover, we have developed the apparatus (including the "grasparatus" and the DROID), paradigms, accessories, and preprocessing algorithms needed to optimize data quality.

The newest direction in the lab is the use of virtual and mixed reality to study human vision in realistic scenarios that offer greater experimental control and convenience than true reality.

Recent and current research questions in our lab include the following:

  • How does the processing of real-world stimuli and actions differ from the processing of commonly used proxies, such as pictures and pantomimed, imagined, or observed actions?

  • Which aspects of reality and virtual/augmented reality are critical for naturalistic behavior and brain responses? Can the added engagement in VR and gaming environments tap into a diverse, robust range of cognitive functions?

  • What can we learn about human vision by studying stimuli and actions that are more realistic and complex than those typically used? For example, how does the brain represent real-world size and distance?

  • Which brain areas are involved in various aspects of grasping and reaching tasks? How do these areas in the human brain relate to areas of the macaque monkey brain (based on reports from other labs)? What types of visual and motor information do these areas encode?

  • How do vision-for-perception and vision-for-action rely on different brain areas? How are these systems affected when the nature of the task is changed, for example by asking subjects to perform pretend (pantomimed) or delayed actions?

  • How are behavior, brain activation, and brain connectivity affected by focal brain lesions in interesting single-case studies, including patients who cannot recognize objects but can act toward them?

  • Do brain areas that encode a particular body part (effector) respond preferentially to the part of space in which that body part typically acts? Do these responses change when a tool is used or when there are arm restraints or obstacles?

  • Given that tool use requires both action and knowledge of an object's function, how do brain areas involved in tool use relate to areas involved in actions and object perception?

  • How do hand actions to bodily targets (e.g., the opposite hand, an object held in the opposite hand, the mouth) differ from hand actions toward external objects?