Visuomotor control of step descent: the importance of visual information from the lower visual field in regulating landing control. When descending a step from a stationary standing position or during on-going gait, is online visual information from the lower visual field important in regulating pre-landing kinematic and landing mechanics variables?
Author: Timmis, Matthew A.
Supervisors: Buckley, John G.; Elliott, David B.
Keywords: Lower visual field
Rights: © 2010 Timmis, M. A. This work is licensed under a Creative Commons Attribution-Non-Commercial-Share-Alike License (http://creativecommons.org/licenses/by-nc-nd/2.0/uk).
Institution: University of Bradford
Department: Department of Engineering Design and Technology
Abstract: Previous research investigating the role of vision in controlling adaptive gait has focused predominantly on over-ground walking or obstacle negotiation; there is thus a paucity of literature investigating visuomotor control of step descent. This thesis addressed the importance of the lower visual field (lvf) in regulating step descent landing control, and determined when visual feedback is typically used in regulating landing control prior to / during step descent. When step descents were completed from a stationary starting position, with the lvf occluded or degraded, participants adapted their stepping strategy in a manner consistent with being uncertain regarding the precise location of the foot / lower leg relative to the floor. However, these changes in landing control under conditions of lvf occlusion were made without fundamentally altering stepping strategy. This suggests that participants were able to plan the general stepping strategy when only upper visual field cues were available. When the lvf was occluded from either 2 or 1 step(s) prior to descending a step during on-going gait, stepping strategy was only affected when the lvf was occluded in the penultimate step. Findings suggest that lvf cues are acquired in the penultimate step / few seconds prior to descent and provide exproprioceptive information about the foot / lower leg relative to the floor, which ensures landing is regulated with increased certainty. Findings also highlight the subtle role of online vision in the latter portion of step descent to 'fine tune' landing control.
Items related by title, author, creator and subject:
The role of peripheral visual cues in planning and controlling movement: an investigation of which cues provided by different parts of the visual field influence the execution of movement and how they work to control upper and lower limb motion. Bloj, Marina; Graci, Valentina (University of Bradford, Bradford School of Optometry and Vision Science, 2013-11-22). Visual cues have previously been classified as visual exproprioceptive, when defining the relative position of the body within the environment and being continuously updated while moving (online), and visual exteroceptive, when describing static features of the environment which are typically elaborated offline (feedforward). However, peripheral visual cues involved in the control of movement have not previously been clearly defined using this classification. Hence, the role played by peripheral visual cues in the planning and/or online control of movement remains unclear. The aim of this thesis was to provide a systematic understanding of the importance of peripheral visual cues in several types of movement, namely over-ground locomotion, adaptive gait, postural stability, and reaching and grasping. 3D motion capture techniques were used to collect limb and whole-body kinematics during such movements. Peripheral visual cues were manipulated by visual field occlusion conditions or by the employment of point-lights in a dark room. Results showed that the visual cues provided by different parts of the peripheral visual field are mainly used for online fine tuning of limb trajectory towards a target (either a floor-based obstacle or an object to grasp). The absence of peripheral visual cues while moving disrupted the spatio-temporal dynamic relationship between subject and target and resulted in increased margins of safety between body and target and increased time and variability of several dependent measures. These findings argue in favour of the classification of peripheral visual cues as visual exproprioceptive.
Central Visual Field Sensitivity Data from Microperimetry with Spatially Dense Sampling. Astle, A.T.; Ali, I.; Denniss, Jonathan (2016-12). Microperimetry, also referred to as fundus perimetry or fundus-driven perimetry, enables simultaneous acquisition of visual sensitivity and eye movement data. We present sensitivity data collected from 60 participants with normal vision using gaze-contingent perimetry. A custom-designed, spatially dense test grid was used to collect data across the visual field within 13° of fixation. These data are supplemental to a study in which we demonstrated a spatial interpolation method that facilitates comparison of acquired data from any set of spatial locations to normative data, and thus screening of individuals with both normal and non-foveal fixation (Denniss and Astle, 2016).
Video extraction for fast content access to MPEG compressed videos. Jiang, Jianmin; Weng, Y. (2009-06-09). As existing video processing technology is primarily developed in the pixel domain, yet digital video is stored in compressed format, any application of those techniques to compressed videos would require decompression. For discrete cosine transform (DCT)-based MPEG compressed videos, the computing cost of standard row-by-row and column-by-column inverse DCT (IDCT) transforms for a block of 8 × 8 elements requires 4096 multiplications and 4032 additions, although practical implementation only requires 1024 multiplications and 896 additions. In this paper, we propose a new algorithm to extract videos directly from the MPEG compressed domain (DCT domain) without full IDCT, which is described in three extraction schemes: 1) video extraction in 2 × 2 blocks with four coefficients; 2) video extraction in 4 × 4 blocks with four DCT coefficients; and 3) video extraction in 4 × 4 blocks with nine DCT coefficients. The computing cost incurred only requires 8 additions and no multiplications for the first scheme, 2 multiplications and 28 additions for the second scheme, and 47 additions (no multiplications) for the third scheme. Extensive experiments were carried out, and the results reveal that: 1) the extracted video maintains competitive quality in terms of visual perception and inspection, and 2) the extracted videos preserve the content well in comparison with fully decompressed ones in terms of histogram measurement. As a result, the proposed algorithm will provide useful tools in bridging the gap between the pixel domain and the compressed domain, facilitating content analysis with low latency and high efficiency, such as applications in surveillance video, interactive multimedia, and image processing.
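The operation counts quoted in the abstract above follow from simple arithmetic on the 8 × 8 block size. A direct 2-D IDCT computes every output pixel as a weighted sum of all 64 coefficients; the row-by-row / column-by-column form exploits the transform's separability into 16 one-dimensional 8-point transforms. The sketch below reproduces that arithmetic under the assumption of a naive dense implementation (fast IDCT algorithms such as AAN reduce the counts much further); it is an illustration of the cost model, not the paper's extraction algorithm:

```python
# Operation counts for a naive IDCT on an N x N block (N = 8 for MPEG).
# Assumes a dense implementation: each output value is a full weighted sum.

N = 8  # block dimension used by MPEG's 8x8 DCT

# Direct 2-D IDCT: each of the N*N output pixels is a weighted sum of
# all N*N coefficients -> N*N multiplications and N*N - 1 additions each.
direct_mults = (N * N) * (N * N)       # 8*8 sums of 64 products
direct_adds = (N * N) * (N * N - 1)

# Separable row/column IDCT: N row transforms then N column transforms,
# i.e. 2*N one-dimensional 8-point transforms; each produces N outputs
# costing N multiplications and N - 1 additions.
rowcol_mults = 2 * N * (N * N)
rowcol_adds = 2 * N * N * (N - 1)

print(direct_mults, direct_adds)   # matches the 4096 / 4032 figures
print(rowcol_mults, rowcol_adds)   # matches the 1024 / 896 figures
```

The factor-of-four saving of the separable form over the direct form is what makes row-column IDCT the standard decoder baseline, and it is this remaining cost that the paper's partial-coefficient extraction schemes reduce to a handful of additions.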