Visuomotor control of step descent: the importance of visual information from the lower visual field in regulating landing control. When descending a step from a stationary standing position or during on-going gait, is online visual information from the lower visual field important in regulating prelanding kinematic and landing mechanics variables?
Publication date
2010-09-29
Author
Timmis, Matthew A.
Keywords
Biomechanics
Human movement
Locomotion
Adaptive gait
Step descent
Vision
Lower visual field
Online
Feedforward
Rights
University of Bradford theses are licensed under a Creative Commons Licence.
Institution
University of Bradford
Department
Department of Engineering Design and Technology
Awarded
2010
Abstract
Previous research investigating the role of vision in controlling adaptive gait has focused predominantly on over-ground walking or obstacle negotiation; there is thus a paucity of literature on the visuomotor control of step descent. This thesis addressed the importance of the lower visual field (lvf) in regulating step descent landing control, and determined when visual feedback is typically used to regulate landing control prior to and during step descent. When step descents were completed from a stationary starting position with the lvf occluded or degraded, participants adapted their stepping strategy in a manner consistent with being uncertain about the precise location of the foot and lower leg relative to the floor. However, these changes in landing control under lvf occlusion were made without fundamentally altering stepping strategy, suggesting that participants were able to plan the general stepping strategy when only upper visual field cues were available. When the lvf was occluded from either two steps or one step prior to descending a step during on-going gait, stepping strategy was affected only when the lvf was occluded in the penultimate step. Findings suggest that lvf cues are acquired in the penultimate step, in the few seconds prior to descent, and provide exproprioceptive information about the foot and lower leg relative to the floor, ensuring that landing is regulated with increased certainty. Findings also highlight the subtle role of online vision in the latter portion of step descent to 'fine tune' landing control.
Type
Thesis
Qualification name
PhD
Collections
Related items
Showing items related by title, author, creator and subject.
-
The role of peripheral visual cues in planning and controlling movement: an investigation of which cues provided by different parts of the visual field influence the execution of movement and how they work to control upper and lower limb motion.
Bloj, Marina; Graci, Valentina (University of Bradford, Bradford School of Optometry and Vision Science, 2013-11-22)
Visual cues have previously been classified as visual exproprioceptive when they define the relative position of the body within the environment and are continuously updated while moving (online), and as visual exteroceptive when they describe static features of the environment that are typically elaborated offline (feedforward). However, peripheral visual cues involved in the control of movement have not previously been clearly defined using this classification, so the role they play in the planning and/or online control of movement remains unclear. The aim of this thesis was to provide a systematic understanding of the importance of peripheral visual cues in several types of movement, namely over-ground locomotion, adaptive gait, postural stability, and reaching and grasping. 3D motion capture techniques were used to collect limb and whole-body kinematics during these movements. Peripheral visual cues were manipulated by visual field occlusion conditions or by the use of point-lights in a dark room. Results showed that the cues provided by different parts of the peripheral visual field are mainly used for online fine-tuning of limb trajectory towards a target (either a floor-based obstacle or an object to grasp). The absence of peripheral visual cues while moving disrupted the spatio-temporal dynamic relationship between subject and target and resulted in increased margins of safety between body and target, as well as increased time and variability across several dependent measures. These findings argue in favour of classifying peripheral visual cues as visual exproprioceptive.
-
Central Visual Field Sensitivity Data from Microperimetry with Spatially Dense Sampling
Astle, A.T.; Ali, I.; Denniss, Jonathan (2016-12)
Microperimetry, also referred to as fundus perimetry or fundus-driven perimetry, enables simultaneous acquisition of visual sensitivity and eye movement data. We present sensitivity data collected from 60 participants with normal vision using gaze-contingent perimetry. A custom-designed, spatially dense test grid was used to collect data across the visual field within 13° of fixation. These data are supplemental to a study in which we demonstrated a spatial interpolation method that facilitates comparison of data acquired at any set of spatial locations to normative data, and thus screening of individuals with both normal and non-foveal fixation (Denniss and Astle, 2016) [1].
-
Natural scene classification, annotation and retrieval: developing different approaches for semantic scene modelling based on Bag of Visual Words.
Neagu, Daniel; Cowling, Peter I.; Alqasrawi, Yousef T. N. (University of Bradford, Department of Computing, 2013-04-05)
With the availability of inexpensive hardware and software, digital imaging has become an important medium of communication in our daily lives. Huge numbers of digital images are collected and made available through the internet, and stored in various fields such as personal image collections, medical imaging, and digital arts; it is therefore important that images are stored, searched, and accessed efficiently. The bag of visual words (BOW) model, which represents images by local invariant features computed at interest point locations, has become a standard choice for many computer vision tasks. Building on this model, this thesis investigates three main problems: natural scene classification, annotation and retrieval. Given an image, the task is to design a system that can determine which class the image belongs to (classification), what semantic concepts it contains (annotation), and which images are most similar to it (retrieval). This thesis contributes to scene classification by proposing a weighting approach, named the keypoints density-based weighting method (KDW), to control the fusion of colour information and bag of visual words on a spatial pyramid layout in a unified framework. Different configurations of BOW, integrated visual vocabularies, and multiple image descriptors are investigated and analysed. The proposed approaches are extensively evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories using 10-fold cross-validation. The second contribution, the scene annotation task, explores whether the integrated visual vocabularies generated for scene classification can be used to model the local semantic information of natural scenes. Here, image annotation is treated as a classification problem: images are partitioned into a fixed 10x10 grid, and each block, represented by BOW and different image descriptors, is classified into one of a set of predefined semantic classes. An image is then represented by the percentage of each semantic concept detected in it. Experimental results on 6 scene categories demonstrate the effectiveness of the proposed approach. Finally, the thesis further explores, through extensive experimental work, the use of different configurations of the BOW for natural scene retrieval.
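The microperimetry item above mentions a spatial interpolation method for comparing measurements at arbitrary visual-field locations against normative data. The sketch below illustrates that general idea only; the coordinates, dB values, and the use of SciPy's griddata are placeholder assumptions, not the authors' published method or dataset (see Denniss and Astle, 2016 for the actual approach).

```python
# Illustrative sketch: interpolate normative sensitivity so that a
# measurement at an untested visual-field location can be compared to it.
# All coordinates and dB values below are invented placeholders.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical normative grid: (x, y) in degrees from fixation, with
# mean normal sensitivity (dB) at each tested location.
norm_xy = np.array([[0, 0], [3, 0], [-3, 0], [0, 3], [0, -3], [6, 6], [-6, -6]])
norm_db = np.array([30.0, 29.0, 29.2, 28.8, 28.5, 26.0, 26.3])

# A measurement taken at a location absent from the normative grid.
query_xy = np.array([[1.5, 1.5]])
expected_db = griddata(norm_xy, norm_db, query_xy, method="linear")[0]
measured_db = 24.0  # hypothetical patient measurement
print(f"expected ~{expected_db:.1f} dB, measured {measured_db:.1f} dB")
```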
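The last related item describes a grid-based BOW annotation scheme: blocks of a 10x10 grid are classified into semantic concepts, and the image is summarised by the percentage of each concept. The following is a minimal sketch of that pipeline, not the thesis code; the descriptor type, vocabulary size, classifier, and integer-coded concept labels are all assumptions.

```python
# Minimal bag-of-visual-words sketch (illustrative only; descriptor
# extraction, vocabulary size, and classifier are placeholder choices).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(training_descriptors, n_words=200):
    """Cluster local descriptors (e.g. SIFT) pooled from training
    images into n_words visual words."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(training_descriptors))

def bow_histogram(descriptors, vocabulary):
    """Represent one image region as a normalised histogram of
    visual-word occurrences."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def annotate_image(block_descriptors, vocabulary, block_classifier, n_concepts):
    """Classify each block of the 10x10 grid into a semantic concept
    (assumed integer-coded 0..n_concepts-1) and return the percentage
    of every concept detected in the image."""
    histograms = np.array([bow_histogram(d, vocabulary) for d in block_descriptors])
    labels = block_classifier.predict(histograms)
    counts = np.bincount(labels, minlength=n_concepts).astype(float)
    return 100.0 * counts / counts.sum()
```

Any pre-trained classifier with a scikit-learn-style predict method (an SVM, for instance) could serve as block_classifier here; the concept-percentage vector returned by annotate_image is then the image-level representation the abstract describes.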