A psychophysical investigation of human visual perceptual memory. A study of the retention of colour, spatial frequency and motion visual information by human visual short term memory mechanisms.
Author: Nemes, Vanda A.
Supervisor: McKeefry, Declan J.
Keywords: Visual short term memory; Spatial frequency; Perceptual memory; Visual information
University of Bradford theses are licensed under a Creative Commons Licence.
Institution: University of Bradford
Department: School of Optometry and Vision Science
Abstract: The aim of this thesis was to investigate how visual information is organised in perceptual short term memory, with particular interest in colour, spatial frequency and velocity. Previous studies of visual short term memory (VSTM) have indicated the existence of specific memory mechanisms for basic visual attributes such as orientation, spatial frequency, velocity, contrast and colour. The retention of these attributes in VSTM can be disrupted by the presentation of masking stimuli during inter-stimulus intervals (ISIs), at delays outside the range of traditional sensory masking. We exploited this memory masking effect to examine the organisation of visual information in VSTM. Four groups of experiments were conducted in which participants carried out a delayed discrimination paradigm that employed a two-alternative forced choice (2-AFC) procedure in conjunction with the method of constant stimuli. The fidelity of VSTM was measured by performance markers such as discrimination thresholds and points of subjective equality. We found selective memory masking effects, which provide further evidence for modular organisation in VSTM: human visual perceptual memory is based upon multiple, tuned channels for colour, spatial frequency and speed, similar to those found at the earliest stages of visual processing. Moreover, each of these storage mechanisms is tuned to a relatively narrow range of stimulus parameters that are closely linked to visual discrimination mechanisms. These findings add further support to the view that low-level sensory processing mechanisms form the basis for the retention of colour, spatial frequency and velocity information in perceptual memory.
We also found evidence for broad transfer of memory masking effects across spatial location, which indicates longer-range, long-duration interactions between channels that are likely to rely upon contributions from neural processes in higher visual areas. In conclusion, the experiments presented in this thesis provide significant insight into the organisation of visual information in perceptual short term memory.
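The delayed discrimination data described in the abstract are typically summarised by fitting a psychometric function to the proportion of "test greater than reference" responses at each stimulus level, from which the point of subjective equality (PSE) and the discrimination threshold are read off. The sketch below is purely illustrative and is not the thesis's analysis code: the stimulus levels, response proportions and the crude grid-search fit are all invented for demonstration.

```python
# Hypothetical analysis of a 2-AFC delayed-discrimination run collected with
# the method of constant stimuli: fit a cumulative-Gaussian psychometric
# function, then report the PSE (50% point) and a discrimination threshold
# (half the 25%-75% interquartile spread of the fitted function).
import math

def cum_gauss(x, mu, sigma):
    """Predicted proportion of 'test greater' responses at stimulus level x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(levels, p_greater):
    """Crude grid-search maximum-likelihood fit; mu is the PSE."""
    best = (float("inf"), None, None)
    lo, hi = min(levels), max(levels)
    for mu in [lo + (hi - lo) * i / 200 for i in range(201)]:
        for sigma in [0.05 + 0.01 * j for j in range(300)]:
            # negative log-likelihood of the observed response proportions
            nll = 0.0
            for x, p in zip(levels, p_greater):
                q = min(max(cum_gauss(x, mu, sigma), 1e-6), 1.0 - 1e-6)
                nll -= p * math.log(q) + (1.0 - p) * math.log(1.0 - q)
            if nll < best[0]:
                best = (nll, mu, sigma)
    _, pse, sigma = best
    # For a Gaussian, the 25% and 75% points lie at mu -/+ 0.6745 * sigma,
    # so half the interquartile spread is 0.6745 * sigma.
    return pse, sigma * 0.6745

# Invented data: reference spatial frequency 4 c/deg, seven test levels.
levels = [3.4, 3.6, 3.8, 4.0, 4.2, 4.4, 4.6]
p_greater = [0.05, 0.12, 0.30, 0.52, 0.75, 0.90, 0.97]
pse, thr = fit_psychometric(levels, p_greater)
```

In a memory masking experiment of this kind, shifts in the PSE and elevations of the threshold across masked and unmasked ISI conditions would serve as the performance markers the abstract refers to.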
Related items (by title, author, creator and subject):
Central Visual Field Sensitivity Data from Microperimetry with Spatially Dense Sampling
Astle, A.T.; Ali, I.; Denniss, Jonathan (2016-12)
Microperimetry, also referred to as fundus perimetry or fundus-driven perimetry, enables simultaneous acquisition of visual sensitivity and eye movement data. We present sensitivity data collected from 60 participants with normal vision using gaze-contingent perimetry. A custom designed spatially dense test grid was used to collect data across the visual field within 13° of fixation. These data are supplemental to a study in which we demonstrated a spatial interpolation method that facilitates comparison of acquired data from any set of spatial locations to normative data, and thus screening of individuals with both normal and non-foveal fixation (Denniss and Astle, 2016).
The role of peripheral visual cues in planning and controlling movement: an investigation of which cues provided by different parts of the visual field influence the execution of movement and how they work to control upper and lower limb motion.
Bloj, Marina; Graci, Valentina (University of Bradford, Bradford School of Optometry and Vision Science, 2013-11-22)
Visual cues have previously been classified as visual exproprioceptive, when defining the relative position of the body within the environment and continuously updated while moving (online), and visual exteroceptive, when describing static features of the environment which are typically elaborated offline (feedforward). However, peripheral visual cues involved in the control of movement have not previously been clearly defined using this classification, so the role played by peripheral visual cues in the planning and/or online control of movement remains unclear. The aim of this thesis was to provide a systematic understanding of the importance of peripheral visual cues in several types of movement, namely overground locomotion, adaptive gait, postural stability, and reaching and grasping. 3D motion capture techniques were used to collect limb and whole body kinematics during such movements. Peripheral visual cues were manipulated by visual field occlusion conditions or by the employment of point-lights in a dark room. Results showed that the visual cues provided by different parts of the peripheral visual field are mainly used for online fine tuning of limb trajectory towards a target (either a floor-based obstacle or an object to grasp). The absence of peripheral visual cues while moving disrupted the spatio-temporal dynamic relationship between subject and target, and resulted in increased margins of safety between body and target and increased time and variability of several dependent measures. These findings argue in favour of the classification of peripheral visual cues as visual exproprioceptive.
Video extraction for fast content access to MPEG compressed videos
Jiang, Jianmin; Weng, Y. (2009-06-09)
As existing video processing technology is primarily developed in the pixel domain, yet digital video is stored in compressed format, any application of those techniques to compressed videos would require decompression. For discrete cosine transform (DCT)-based MPEG compressed videos, the computing cost of a standard row-by-row and column-by-column inverse DCT (IDCT) for a block of 8 × 8 elements requires 4096 multiplications and 4032 additions, although a practical implementation requires only 1024 multiplications and 896 additions. In this paper, we propose a new algorithm to extract videos directly in the MPEG compressed domain (DCT domain) without full IDCT, described in three extraction schemes: 1) video extraction in 2 × 2 blocks with four DCT coefficients; 2) video extraction in 4 × 4 blocks with four DCT coefficients; and 3) video extraction in 4 × 4 blocks with nine DCT coefficients. The computing cost incurred requires only 8 additions and no multiplications for the first scheme, 2 multiplications and 28 additions for the second, and 47 additions (no multiplications) for the third. Extensive experiments were carried out, and the results reveal that: 1) the extracted video maintains competitive quality in terms of visual perception and inspection; and 2) the extracted videos preserve the content well in comparison with fully decompressed ones in terms of histogram measurement. As a result, the proposed algorithm provides useful tools for bridging the gap between the pixel domain and the compressed domain, facilitating content analysis with low latency and high efficiency in applications such as surveillance video, interactive multimedia and image processing.
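The general idea behind such DCT-domain extraction can be sketched as follows; this is a generic illustration of low-frequency DCT truncation, not the paper's optimised integer schemes, and the rescaling factor shown is exact only for the DC term. The 8 × 8 test block and all helper names are invented for the example.

```python
# Generic illustration of DCT-domain thumbnail extraction: keep only the
# top-left low-frequency coefficients of each 8x8 DCT block and apply a
# small 2-point IDCT, obtaining a reduced-resolution image without the
# full 8x8 inverse transform.
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    rows = []
    for k in range(n):
        s = math.sqrt((1.0 if k == 0 else 2.0) / n)
        rows.append([s * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                     for i in range(n)])
    return rows

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def extract_2x2(block8):
    """Approximate 2x2 thumbnail of an 8x8 pixel block via its DCT.

    Truncate the 8x8 DCT to its 2x2 low-frequency corner, rescale by
    M/N = 2/8 (exact for the DC coefficient, approximate for the rest),
    and apply a 2-point 2-D IDCT.
    """
    d8, d2 = dct_matrix(8), dct_matrix(2)
    coeffs = matmul(matmul(d8, block8), transpose(d8))   # forward 8x8 DCT
    low = [[coeffs[i][j] * (2.0 / 8.0) for j in range(2)] for i in range(2)]
    return matmul(matmul(transpose(d2), low), d2)        # 2-point 2-D IDCT

# Smooth test block with pixel value i + j: the thumbnail should land
# near the true 4x4 block means [[3, 7], [7, 11]].
block = [[float(i + j) for j in range(8)] for i in range(8)]
thumb = extract_2x2(block)
```

In an MPEG decoder the forward DCT step would of course be skipped, since the coefficients are read directly from the bitstream; the paper's schemes additionally fold the basis and scale constants together so that each thumbnail pixel costs only a handful of additions.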