Attentional and Nonattentional Processes in Human Vision.
Ronald A. Rensink, Departments of Computer Science and Psychology, University of British Columbia, Vancouver BC, Canada.

Notes for Course 75: Seeing, Hearing, and Touching: Putting It All Together. SIGGRAPH 2004; Los Angeles, CA, USA.

Outline of the Course

How to design interactive media and applications for emerging computer graphics display technologies. Innovations in large-screen displays enable us to present dynamic, high-resolution graphical scenes, but they require designers to predict how those scenes will be parsed by users' visual systems. Information and data visualization approaches are increasing in importance, but their effectiveness depends on their ability to support visual cognition. Haptic (touch) techniques offer tangibility, but they must be designed around the spatial and temporal sensitivity of touch, both as an independent information channel and as support for user interaction (control intimacy). Bottlenecks in sound perception impose their own characteristic design constraints: producers must determine whether auditory events are perceived as independent channels (for example, system status, speech, music, and background) or as an integrated part of a multichannel event (for example, a collision).

The course is divided into five modules: Seeing, Hearing, Touching, Sensory Integration, and Applications/Design. Each module covers relevant aspects of perceptual theory and its application to the design and testing of interaction through step-by-step design case studies. Topics include the cognitive science of intersensory processing (vision, hearing, haptics) in scene understanding and interaction, including attention, change blindness, ventriloquism, and space constancy; enhanced iterative design (Schön's Reflective Practitioner) for the integration of visual display design; haptic devices; and sonified and integrated visual/auditory environments, including virtual environments and community/performance spaces.

Outline of the Seeing Section

As casual observers, we tend to believe that vision is based on representations that are coherent and complete, with everything in the visual field described in great detail. However, changes made to a scene during a visual disturbance turn out to be surprisingly difficult to see (change blindness), arguing against this idea. Instead, it is argued that vision is based on a more dynamic, "just-in-time" representation, one with deep similarities to the way that users interact with external displays. Several suggestions are put forward as to how these similarities can be harnessed to design intelligent display systems that interact with humans in highly effective and novel ways.

