Abstract: This book provides a review of the state of the art in computer graphics, visualization, and visual analytics by researchers and developers who are closely involved in pioneering the latest advances in the field. It is a unique presentation of multi-disciplinary aspects of visualization and visual analytics, architecture and displays, augmented reality, the use of color, user interfaces and cognitive aspects, and technology transfer. It provides readers with insights into the latest developments in areas such as new displays and new display processors, new collaboration technologies, the role of visual, multimedia, and multimodal user interfaces, visual analysis at extreme scale, and adaptive visualization.
Version: No full-text available in the repository
Citation: Dill J, Earnshaw RA, Kasik D, Vince JA and Wong PC (Eds.) (2012) Expanding the Frontiers of Visual Analytics and Visualization. London: Springer-Verlag.
Link to publisher's version: http://www.springer.com/us/book/9781447128038
Showing items related by title, author, creator and subject.
Central Visual Field Sensitivity Data from Microperimetry with Spatially Dense Sampling
Astle, A.T.; Ali, I.; Denniss, Jonathan (2016-12)
Microperimetry, also referred to as fundus perimetry or fundus-driven perimetry, enables simultaneous acquisition of visual sensitivity and eye movement data. We present sensitivity data collected from 60 participants with normal vision using gaze-contingent perimetry. A custom-designed, spatially dense test grid was used to collect data across the visual field within 13° of fixation. These data are supplemental to a study in which we demonstrated a spatial interpolation method that facilitates comparison of acquired data from any set of spatial locations to normative data, and thus screening of individuals with both normal and non-foveal fixation (Denniss and Astle, 2016).
The role of peripheral visual cues in planning and controlling movement: an investigation of which cues provided by different parts of the visual field influence the execution of movement and how they work to control upper and lower limb motion
Bloj, Marina; Graci, Valentina (University of Bradford, Bradford School of Optometry and Vision Science, 2013-11-22)
Visual cues have previously been classified as visual exproprioceptive, when defining the relative position of the body within the environment and continuously updated while moving (online), and visual exteroceptive, when describing static features of the environment which are typically elaborated offline (feedforward). However, peripheral visual cues involved in the control of movement have not previously been clearly defined using this classification, so the role played by peripheral visual cues in the planning and/or online control of movement remains unclear. The aim of this thesis was to provide a systematic understanding of the importance of peripheral visual cues in several types of movement, namely overground locomotion, adaptive gait, postural stability, and reaching and grasping. 3D motion capture techniques were used to collect limb and whole-body kinematics during such movements. Peripheral visual cues were manipulated by visual field occlusion conditions or by the use of point-lights in a dark room. Results showed that the visual cues provided by different parts of the peripheral visual field are mainly used for online fine-tuning of limb trajectory towards a target (either a floor-based obstacle or an object to grasp). The absence of peripheral visual cues while moving disrupted the spatio-temporal dynamic relationship between subject and target, and resulted in increased margins of safety between body and target and in increased time and variability across several dependent measures. These findings argue in favour of classifying peripheral visual cues as visual exproprioceptive.
Video extraction for fast content access to MPEG compressed videos
Jiang, Jianmin; Weng, Y. (2009-06-09)
As existing video processing technology is primarily developed in the pixel domain, yet digital video is stored in compressed format, applying those techniques to compressed videos would require decompression. For discrete cosine transform (DCT)-based MPEG compressed videos, the computing cost of standard row-by-row and column-by-column inverse DCT (IDCT) transforms for a block of 8 × 8 elements requires 4096 multiplications and 4032 additions, although practical implementations require only 1024 multiplications and 896 additions. In this paper, we propose a new algorithm to extract videos directly from the MPEG compressed domain (DCT domain) without full IDCT, described in three extraction schemes: 1) video extraction in 2 × 2 blocks with four DCT coefficients; 2) video extraction in 4 × 4 blocks with four DCT coefficients; and 3) video extraction in 4 × 4 blocks with nine DCT coefficients. The computing cost incurred is only 8 additions and no multiplications for the first scheme, 2 multiplications and 28 additions for the second scheme, and 47 additions (no multiplications) for the third scheme. Extensive experiments were carried out, and the results reveal that: 1) the extracted video maintains competitive quality in terms of visual perception and inspection; and 2) the extracted videos preserve the content well, in terms of histogram measurement, compared with fully decompressed versions. As a result, the proposed algorithm provides useful tools for bridging the gap between the pixel domain and the compressed domain, facilitating content analysis with low latency and high efficiency in applications such as surveillance video, interactive multimedia, and image processing.
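The first extraction scheme above (a 2 × 2 block from four DCT coefficients using 8 additions and no multiplications) can be sketched with a butterfly structure; this is an illustrative reconstruction from the cost figures in the abstract, not code from the paper, and the function and variable names are invented here. Constant scaling by the DCT basis factors is folded out, since it does not affect the addition count.

```python
def extract_2x2(F00, F01, F10, F11):
    """Approximate a 2x2 pixel block from the four lowest-frequency
    DCT coefficients F(0,0), F(0,1), F(1,0), F(1,1) of an 8x8 block,
    using 8 additions and no multiplications (up to a constant scale).

    A hedged sketch: the actual scheme in the paper may differ in
    normalisation and coefficient handling.
    """
    # Butterfly over the vertical frequency (4 additions) ...
    a = F00 + F10  # sum for the top half
    b = F00 - F10  # difference for the bottom half
    c = F01 + F11
    d = F01 - F11
    # ... then over the horizontal frequency (4 more additions).
    return [[a + c, a - c],
            [b + d, b - d]]
```

With only the DC coefficient set, every output sample equals that coefficient (up to scale), which matches the intuition that the scheme reduces each 8 × 8 block to a coarse 2 × 2 thumbnail.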