Rewarding context accelerates implicit guidance in visual search

Yuan Chi Tseng, Alejandro Lleras

Research output: Article, peer-reviewed

25 Citations (Scopus)


It is well known that observers can implicitly learn the spatial context of complex visual searches, such that future searches through repeated contexts are completed faster than those through novel contexts, even though observers remain at chance at discriminating repeated from new contexts. This contextual-cueing effect arises quickly (within less than five exposures) and asymptotes within 30 exposures to repeated contexts. In spite of being a robust effect (its magnitude is over 100 ms at the asymptotic level), the effect is implicit: Participants are usually at chance at discriminating old from new contexts at the end of an experiment, in spite of having seen each repeated context more than 30 times throughout a 50-min experiment. Here, we demonstrate that the speed at which the contextual-cueing effect arises can be modulated by external rewards associated with the search contexts (not with the performance itself). Following each visual search trial (and irrespective of a participant's search speed on the trial), we provided a reward, a penalty, or no feedback to the participant. Crucially, the type of feedback obtained was associated with the specific contexts, such that some repeated contexts were always associated with reward, and others were always associated with penalties. Implicit learning occurred fastest for contexts associated with positive feedback, though penalizing contexts also showed a learning benefit. Consistent feedback also produced faster learning than did variable feedback, though unexpected penalties produced the largest immediate effects on search performance.

Pages (from-to): 287-298
Journal: Attention, Perception, and Psychophysics
Publication status: Published - 2013 May 6

All Science Journal Classification (ASJC) codes

  • Linguistics and Language
  • Experimental and Cognitive Psychology
  • Sensory Systems
  • Language and Linguistics
