Context Sensitivity Improves Human-Machine Visual Alignment

Frieda Born
Tom Neuhäuser
Lukas Muttenthaler
Brett D. Roads
Bernhard Spitzer
Andrew Kyle Lampinen
Matt Jones
Klaus-Robert Müller
Michael Curtis Mozer

March 02, 2026

Modern machine learning models typically represent inputs as fixed points in a high-dimensional embedding space. While this approach has proven powerful for a wide range of downstream tasks, it differs fundamentally from the way humans process information. Because humans must constantly adapt to their environment, they represent objects and their relationships in a highly context-sensitive manner. To address this gap, we propose a method for computing context-sensitive similarities from neural network embeddings, which we apply to modeling a triplet odd-one-out task in which an additional anchor image serves as simultaneous context. Modeling context enables us to achieve up to a 15% improvement in odd-one-out accuracy over a context-insensitive model. We find that this improvement is consistent across both original and "human-aligned" vision foundation models.
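To make the setup concrete, here is a minimal sketch of a context-sensitive odd-one-out decision rule over fixed embeddings. The specific reweighting scheme below (scaling embedding dimensions by the anchor's activations before a cosine comparison) is an illustrative assumption, not the paper's actual method; the function and variable names are hypothetical.

```python
import numpy as np

def context_weighted_similarity(a, b, anchor, eps=1e-8):
    # Weight each embedding dimension by the anchor's absolute activation,
    # so dimensions salient in the context dominate the comparison.
    # NOTE: this reweighting is an illustrative assumption, not the paper's method.
    w = np.abs(anchor)
    w = w / (w.sum() + eps)
    num = np.sum(w * a * b)
    denom = np.sqrt(np.sum(w * a * a)) * np.sqrt(np.sum(w * b * b)) + eps
    return num / denom  # anchor-weighted cosine similarity

def odd_one_out(embeddings, anchor):
    # embeddings: three image embeddings; anchor: the context image's embedding.
    # Score each pair; the item outside the most similar pair is the odd one out.
    pairs = [(0, 1), (0, 2), (1, 2)]
    sims = [context_weighted_similarity(embeddings[i], embeddings[j], anchor)
            for i, j in pairs]
    most_similar_pair = int(np.argmax(sims))
    return [2, 1, 0][most_similar_pair]  # the item not in that pair
```

A context-insensitive baseline corresponds to a uniform weight vector; varying the anchor changes which dimensions drive the pairwise similarities, and hence which item is judged the odd one out.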