Effects of gaze strategy on standing postural stability in older multifocal wearers
Abstract

BACKGROUND: Postural instability in older people is associated with an increased risk of falling. This experiment investigated the effects of different gaze strategies on postural stability in older people when using distance single-vision compared with multifocal (progressive addition lens and bifocal) spectacles.

METHODS: Eighteen healthy older habitual multifocal spectacle-wearers (mean age 72.1 +/- 4.0 years) participated in a randomised, cross-over study. Postural stability during quiet standing was assessed as the root mean square excursion of the centre of pressure (RMS-COP) in the antero-posterior direction. Ground reaction force data were collected for 30 seconds while subjects viewed one of two visual targets (one square metre) of different spatial frequencies and contrasts, wearing either distance single-vision or multifocal (progressive addition and bifocal) spectacles. The visual targets were positioned either ahead at eye level or on the ground (viewing distance 2.06 metres) and were viewed under the following head-gaze conditions: 'head neutral-gaze forward', 'head flexed-gaze down' and 'head neutral-gaze down'.

RESULTS: Neither the type of spectacles worn nor the target viewed had a significant effect on postural stability, but postural stability deteriorated (antero-posterior RMS-COP excursion increased) in the 'head neutral-gaze down' condition compared with the 'head flexed-gaze down' and 'head neutral-gaze forward' conditions (5.9, 5.5 and 5.0 mm respectively, p < 0.001).

CONCLUSIONS: Multifocal use had no effect on standing postural stability. Irrespective of the spectacles worn, when fixating a visual target positioned at ground level, postural stability was better in the 'head flexed-gaze down' condition than in the 'head neutral-gaze down' condition.
A useful strategy to reduce falling in the older person might be to advise multifocal and distance single-vision spectacle-wearers to flex their heads rather than just lower their eyes when looking downwards.
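The RMS-COP measure used in this study is a standard summary of postural sway: the root mean square excursion of the centre-of-pressure trace about its mean position. A minimal sketch of how it might be computed from an antero-posterior COP trace (the function name and the synthetic trace are illustrative, not from the study):

```python
import numpy as np

def rms_cop(cop_ap_mm):
    """Root mean square excursion of an antero-posterior
    centre-of-pressure trace, in the same units as the input.
    Excursion is measured about the mean COP position."""
    cop = np.asarray(cop_ap_mm, dtype=float)
    return float(np.sqrt(np.mean((cop - cop.mean()) ** 2)))

# Illustrative only: a synthetic 30-second trace sampled at 100 Hz.
t = np.linspace(0.0, 30.0, 3000)
trace = 5.0 * np.sin(2 * np.pi * 0.3 * t)  # sway in mm, made up for the sketch
sway = rms_cop(trace)
```

A larger RMS-COP value indicates greater sway, which is why the 5.9 mm figure for 'head neutral-gaze down' represents worse stability than the 5.0 mm for 'head neutral-gaze forward'.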
Citation: Johnson, L., Elliott, D. B. and Buckley, J. G. (2009) Effects of gaze strategy on standing postural stability in older multifocal wearers. Clin Exp Optom, 92 (1), 19-26.
Link to publisher’s version: http://dx.doi.org/10.1111/j.1444-0938.2008.00304.x
Related items (by title, author, creator and subject):
Optimal edge filters explain human blur detection. McIlhagga, William H.; May, K.A. (2012). Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative of a Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N(1), and N(3)(+)) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur.
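The derivative-of-Gaussian filter this abstract refers to has a simple closed form, g'(x) = -(x/sigma^2) exp(-x^2 / 2 sigma^2) up to scale: an odd-symmetric kernel whose response peaks at edges. A minimal sketch (parameter values illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_derivative_filter(sigma, half_width):
    """First derivative of a Gaussian: the filter shape the
    classification images resembled. Odd-symmetric, so it
    gives zero response to uniform regions and a peaked
    response at luminance edges."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return -x / sigma**2 * g

# Convolving the kernel with a step edge (convolution with G'
# computes the derivative of the Gaussian-smoothed signal), so
# the response peaks at the edge location.
edge = np.concatenate([np.zeros(32), np.ones(32)])
response = np.convolve(edge, gaussian_derivative_filter(2.0, 8), mode="same")
```

Wider blur in the stimulus broadens the optimal kernel, which is one intuition for why a filter family parameterised by sigma can encode blur as well as detect edges.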
Optimal integration of shading and binocular disparity for depth perception. Lovell, P.G.; Bloj, Marina; Harris, J.M. (2012). We explore the relative utility of shape from shading and binocular disparity for depth perception. Ray-traced images either featured a smooth surface illuminated from above (shading-only) or were defined by small dots (disparity-only). Observers judged which of a pair of smoothly curved convex objects had most depth. The shading cue was around half as reliable as the rich disparity information for depth discrimination. Shading- and disparity-defined cues were combined by placing dots in the stimulus image, superimposed upon the shaded surface, resulting in veridical shading and binocular disparity. Independently varying the depth delivered by each channel allowed creation of conflicting disparity-defined and shading-defined depth. We manipulated the reliability of the disparity information by adding disparity noise. As noise levels in the disparity channel were increased, perceived depths and variances shifted toward those of the now more reliable shading cue. Several different models of cue combination were applied to the data. Perceived depths and variances were well predicted by a classic maximum likelihood estimator (MLE) model of cue integration, for all but one observer. We discuss the extent to which MLE is the most parsimonious model to account for observer performance.
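The classic MLE model of cue integration mentioned above has a standard closed form: assuming independent Gaussian noise on each cue, the combined estimate weights each cue by its inverse variance, and the combined variance is smaller than either cue's alone. A hedged sketch (variable names and values illustrative, not from the paper):

```python
def mle_combine(est_a, var_a, est_b, var_b):
    """Maximum-likelihood fusion of two independent Gaussian cue
    estimates. Weights are proportional to reliability (inverse
    variance); the fused variance never exceeds either input's."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    w_b = 1.0 - w_a
    combined_est = w_a * est_a + w_b * est_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined_est, combined_var

# Illustrative numbers: a reliable disparity cue (variance 1) and a
# shading cue about a quarter as reliable (variance 4). Adding noise
# to the disparity channel would raise var_a and pull the combined
# estimate toward the shading estimate, as the abstract describes.
depth, variance = mle_combine(10.0, 1.0, 14.0, 4.0)
```

The pull toward the more reliable cue under added disparity noise falls directly out of the weighting: as var_a grows, w_a shrinks and the shading term dominates.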
Recalibration of perceived time across sensory modalities. Hanson, James Vincent Michael; Heron, James; Whitaker, David J. (2008). When formulating an estimate of event time, the human sensory system has been shown to possess a degree of perceptual flexibility. Specifically, the perceived relative timing of auditory and visual stimuli is, to some extent, a product of recent experience. It has been suggested that this form of sensory recalibration may be peculiar to the audiovisual domain. Here we investigate how adaptation to sensory asynchrony influences the perceived temporal order of audiovisual, audiotactile and visuotactile stimulus pairs. Our data show that a brief period of repeated exposure to asynchrony in any of these sensory pairings results in marked changes in subsequent temporal order judgments: the point of perceived simultaneity shifts toward the level of adaptation asynchrony. We find that the size and nature of this shift is very similar in all three pairings and that sensitivity to asynchrony is unaffected by the adaptation process. In light of these findings we suggest that a single supramodal mechanism may be responsible for the observed recalibration of multisensory perceived time.