Psychological invariants of recognition memory
Our choices often depend on the context that the available options impose on each other. This is also true for memory-driven choices (which of these items do I remember, and which are new?). For example, the probability of correctly recognizing an actual memory target among new, never-studied items (foils) depends on how many foils there are and how similar they are to the memory contents. In this line of research, I use large-scale online memory testing in conjunction with advanced signal-detection theory (SDT) modeling to reveal representations that stay invariant even when the recognition context changes.
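To make this logic concrete, here is a minimal sketch (my illustration, not the actual models from this work) of how equal-variance SDT separates an invariant memory-strength parameter from the testing context: if the target's strength is fixed at d', the predicted probability of picking the target over n foils changes with n even though d' itself does not.

```python
# Minimal equal-variance SDT sketch for an (n_foils + 1)-item recognition
# test. Target strength ~ N(d', 1), each foil ~ N(0, 1); the observer
# picks the item with the highest strength. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

def p_correct(d_prime, n_foils):
    """P(target's strength exceeds all foils'): integral of
    phi(x - d') * Phi(x)**n_foils over x, by a simple Riemann sum."""
    x = np.linspace(-8, 8, 4001)
    density = norm.pdf(x - d_prime) * norm.cdf(x) ** n_foils
    return float(np.sum(density) * (x[1] - x[0]))

d_prime = 1.5  # hypothetical, fixed memory strength (the invariant)
for n_foils in (1, 3, 7):
    print(f"{n_foils} foils: P(correct) = {p_correct(d_prime, n_foils):.3f}")
# Accuracy drops as foils are added, while d' stays unchanged.
```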
Neural dynamics of memory decisions
I use event-related potentials (ERPs) recorded during study and memory tests as time-resolved measures of memory and decision processes. For example, my most recent work, in collaboration with Ed Vogel, aims to identify electrophysiological markers of memory-driven decisions when people choose among several simultaneously presented alternatives.
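As a toy illustration of why ERPs provide time-resolved measures (a sketch under simplified assumptions, not the actual recording or analysis pipeline): averaging many stimulus-locked EEG epochs suppresses activity that is not time-locked to the event, leaving the evoked response with its timing intact.

```python
# Core ERP logic on simulated data: the ERP is the trial average of
# time-locked EEG epochs; noise shrinks roughly as 1/sqrt(n_trials).
# Epoch dimensions and signal parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 200, 500
times = np.linspace(-0.2, 0.8, n_times)  # seconds from stimulus onset

# Single-trial EEG: a small evoked deflection buried in large noise.
evoked = 2.0 * np.exp(-((times - 0.3) ** 2) / 0.005)  # microvolts
epochs = evoked + rng.normal(0, 10.0, size=(n_trials, n_times))

erp = epochs.mean(axis=0)  # averaging recovers the time-locked signal
print(f"peak ERP: {erp.max():.2f} uV at {times[erp.argmax()]:.3f} s")
```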
Object and feature representation in memory
How does visual memory store information about meaningful real-world objects, and how do we recover this information at the time of recognition? Real-world objects are often thought of as the natural representational units of everyday perception and memory. It is probably impossible to introspect otherwise: thinking of an open backpack or a coffee mug full of coffee, how can one imagine “openness” or “coffee-fullness” separately from the objects themselves? However, my work shows that memory represents objects as at least partially independent collections of “features,” and the link between these features is often elusive. In particular, we find that people frequently “swap” features between different real-world objects (e.g., mistakenly remembering that the wrong mug was full of coffee when it was actually empty), despite having excellent memory for which exemplars they saw (e.g., which mugs) and which states they saw (e.g., whether a mug was full or empty).
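A toy simulation of this logic (my illustrative assumptions and numbers, not the model reported in the papers): if the state of an object and its binding to a particular exemplar are stored independently, swap errors arise whenever the binding is lost, even though memory for the state itself remains excellent.

```python
# Independent-features toy model for a two-object display (e.g., one
# full mug, one empty mug). Exemplar memory is omitted for simplicity;
# all probabilities are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000
p_state = 0.90    # P(remember a state, e.g., full vs. empty)
p_binding = 0.55  # P(remember which exemplar had that state)

remember_state = rng.random(n_trials) < p_state
remember_binding = rng.random(n_trials) < p_binding
lucky_guess = rng.random(n_trials) < 0.5  # lost binding -> random assignment

binding_correct = remember_state & (remember_binding | lucky_guess)
swap = remember_state & ~binding_correct  # right state, wrong object

print(f"state accuracy:   {remember_state.mean():.2f}")
print(f"binding accuracy: {binding_correct.mean():.2f}")
print(f"swap rate:        {swap.mean():.2f}")
# High state accuracy (~0.90) coexists with frequent swaps (~0.20).
```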
Computational modeling of ensemble perception
We have a remarkable ability to accurately judge summary statistics of large collections of briefly presented items (for example, the average or variability of orientation, size, color, speed, or even emotional expression in a crowd). Together with my collaborators, I employ insights from basic vision neuroscience to build a plausible computational model explaining the rich phenomenology of ensemble perception. The core mechanism of the model is spatial pooling of local visual signals by populations of feature-selective neurons with large receptive fields (akin to the pooling layers of a convolutional neural network). We carefully fit several existing ensemble datasets from different independent labs and found that our model reproduces the quantitative patterns (including some unusual non-monotonic effects and systematic biases in visual statistical judgments) with the same minimal set of assumptions.
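Here is a minimal sketch of that core mechanism under my own simplifying assumptions (not the published implementation): orientation-selective units whose receptive fields span many items pool the local signals, and the ensemble average is then read out from the pooled population response, here with a simple population-vector decoder.

```python
# Pooling-model sketch: tuned units sum responses over all items in
# their (large) receptive field; the ensemble mean orientation is
# decoded from the pooled population. Tuning parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
item_oris = rng.normal(loc=30.0, scale=15.0, size=12)  # item orientations, deg

prefs = np.arange(0, 180, 5)  # preferred orientations of pooling units
kappa = 2.0                   # tuning concentration (hypothetical)

# Orientation is circular with period 180 deg -> work in doubled angles.
def tuning(theta, pref):
    return np.exp(kappa * (np.cos(np.deg2rad(2 * (theta - pref))) - 1))

# Each unit pools (sums) its responses to every item in the display.
pooled = np.array([tuning(item_oris, p).sum() for p in prefs])

# Population-vector readout of the pooled response.
z = np.sum(pooled * np.exp(1j * np.deg2rad(2 * prefs)))
estimate = (np.rad2deg(np.angle(z)) / 2) % 180
print(f"true mean: {item_oris.mean():.1f} deg, decoded: {estimate:.1f} deg")
```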
Ensemble-based visual categorization of multiple objects
If we look at a lemon tree, where fruits are spatially intermixed with foliage, we easily get an impression of two object categories (lemons and leaves) without needing to process each individual patch of the bush. My work shows how an internal representation of the feature distribution across a visual scene supports this instant impression of seeing two types of objects. Overall, this work suggests that ensemble representations can support elaborate visual processing (categorization) at a relatively early stage of perception.
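An illustrative sketch of the idea (my assumptions, not the published model): if the scene's feature distribution is represented directly, two object categories can be "seen" simply because that distribution is bimodal. Here the hue distribution of a lemon-tree-like scene is smoothed and its modes counted.

```python
# Feature-distribution categorization sketch: kernel-smooth the hue
# values across the scene and count modes. Hue values, cluster sizes,
# and the bandwidth are all hypothetical.
import numpy as np

rng = np.random.default_rng(3)
hues = np.concatenate([rng.normal(55, 6, 40),     # yellowish lemons (deg)
                       rng.normal(110, 8, 200)])  # greenish leaves (deg)

grid = np.arange(0, 180)
bandwidth = 8.0
# Kernel-smoothed density over hue, a crude stand-in for pooled
# feature-tuned responses across the scene.
density = np.exp(-0.5 * ((grid[:, None] - hues[None, :]) / bandwidth) ** 2).sum(axis=1)

# Count interior local maxima -> number of perceived categories.
peaks = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
print(f"modes in the hue distribution: {int(peaks.sum())} "
      f"(two modes -> 'lemons' and 'leaves')")
```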