The Culham Lab is interested in how human observers use vision, both to perceive the world and to guide their actions. We use several techniques from cognitive neuroscience to understand human vision. Functional magnetic resonance imaging (fMRI) is a safe way to study which regions of the brain are active when people view different visual stimuli or use visual information to perform different tasks. Behavioral testing tells us about the visual experiences and limits of normal subjects. Neuropsychological testing tells us how these abilities are compromised in patients with damage to different areas of the brain.
Our lab has developed unique techniques for bringing the "real world" into the constrained environment of the fMRI scanner. We are particularly interested in tasks such as grasping, reaching, tool use, and 3D object recognition. In our fMRI studies of reaching and grasping, rather than presenting unrealistic, ungraspable objects on a flat 2D screen or having subjects pretend to perform actions, we have subjects perform real actions on real objects. Moreover, we have developed the apparatus (including the "grasparatus"), paradigms, accessories, and preprocessing algorithms needed to optimize the quality of the data.
Our research has focused on the processing within and interactions between several brain regions, including the following:
the Anterior IntraParietal Sulcus (aIPS), which is active during grasping;
the Superior Parieto-Occipital Cortex (SPOC), which is active during reaching and shows a preference for near space; and
the Lateral Occipital Cortex (LOC), which is active during perceptual processing of objects.
Recent and current research questions in our lab include the following:
Which brain areas are involved in various aspects of grasping and reaching tasks? How do these areas in the human brain relate to areas of the macaque monkey brain (based on reports from other labs)? What types of information do these areas encode about the shape, size, and orientation of objects? What types of information do these areas encode about the actions performed upon the objects?
How do vision-for-perception and vision-for-action rely on different brain areas, and how do those areas interact? How are these processes affected when the nature of the task is changed, for example by asking subjects to perform pretend (pantomimed) or delayed actions?
How are behavior, brain activation, and brain connectivity affected by focal brain lesions in interesting single case studies, including patients who cannot recognize objects but can act toward them?
Do brain areas that encode a particular body part (effector) respond preferentially to the part of space in which that body part typically acts? Do these responses change when a tool is used, or when the arm is restrained or obstacles are present?
Given that tool use requires both action and knowledge of an object's identity and function, how do brain areas involved in tool use relate to areas involved in actions and object perception?
How do hand actions toward bodily targets (e.g., the opposite hand, an object held in the opposite hand, the mouth) differ from hand actions toward external objects?
How do brain areas involved in actions process objects? Do they distinguish between 3D and 2D objects?