Publication date: 2012
Peer-Reviewed: Yes
Open Access status: closedAccess
Metadata
Show full item record
Abstract
This book provides a review of the state of the art in computer graphics, visualization, and visual analytics by researchers and developers who are closely involved in pioneering the latest advances in the field. It is a unique presentation of multi-disciplinary aspects in visualization and visual analytics, architecture and displays, augmented reality, the use of color, user interfaces and cognitive aspects, and technology transfer. It provides readers with insights into the latest developments in areas such as new displays and new display processors, new collaboration technologies, the role of visual, multimedia, and multimodal user interfaces, visual analysis at extreme scale, and adaptive visualization.
Version
No full-text in the repository
Citation
Dill J, Earnshaw RA, Kasik D, Vince JA and Wong PC (Eds.) (2012) Expanding the Frontiers of Visual Analytics and Visualization. London: Springer-Verlag.
Link to Version of Record
https://doi.org/10.1007/978-1-4471-2804-5
Type
Book
Related items
Showing items related by title, author, creator and subject.
-
The role of peripheral visual cues in planning and controlling movement: an investigation of which cues provided by different parts of the visual field influence the execution of movement and how they work to control upper and lower limb motion.
Bloj, Marina; Graci, Valentina (University of Bradford, Bradford School of Optometry and Vision Science, 2013-11-22)
Visual cues have previously been classified as visual exproprioceptive, when defining the relative position of the body within the environment and continuously updated while moving (online), and visual exteroceptive, when describing static features of the environment which are typically elaborated offline (feedforward). However, the peripheral visual cues involved in the control of movement have not previously been clearly defined using this classification. Hence the role played by peripheral visual cues in the planning and/or online control of movement remains unclear. The aim of this thesis was to provide a systematic understanding of the importance of peripheral visual cues in several types of movement, namely overground locomotion, adaptive gait, postural stability, and reaching and grasping. 3D motion capture techniques were used to collect limb and whole-body kinematics during such movements. Peripheral visual cues were manipulated by visual field occlusion conditions or by the employment of point-lights in a dark room. Results showed that the visual cues provided by different parts of the peripheral visual field are mainly used for online fine tuning of limb trajectory towards a target (either a floor-based obstacle or an object to grasp). The absence of peripheral visual cues while moving disrupted the spatio-temporal dynamic relationship between subject and target and resulted in increased margins of safety between body and target and increased time and variability of several dependent measures. These findings argue in favour of the classification of peripheral visual cues as visual exproprioceptive.
-
Central Visual Field Sensitivity Data from Microperimetry with Spatially Dense Sampling
Astle, A.T.; Ali, I.; Denniss, Jonathan (2016-12)
Microperimetry, also referred to as fundus perimetry or fundus-driven perimetry, enables simultaneous acquisition of visual sensitivity and eye movement data. We present sensitivity data collected from 60 participants with normal vision using gaze-contingent perimetry. A custom-designed, spatially dense test grid was used to collect data across the visual field within 13° of fixation. These data are supplemental to a study in which we demonstrated a spatial interpolation method that facilitates comparison of acquired data from any set of spatial locations to normative data, and thus screening of individuals with both normal and non-foveal fixation (Denniss and Astle, 2016) [1].
-
Natural scene classification, annotation and retrieval. Developing different approaches for semantic scene modelling based on Bag of Visual Words.
Neagu, Daniel; Cowling, Peter I.; Alqasrawi, Yousef T. N. (University of Bradford, Department of Computing, 2013-04-05)
With the availability of inexpensive hardware and software, digital imaging has become an important medium of communication in our daily lives. A huge amount of digital images is being collected, made available through the internet, and stored in various fields such as personal image collections, medical imaging, digital arts, etc. Therefore, it is important to make sure that images are stored, searched and accessed in an efficient manner. The use of the bag of visual words (BOW) model for modelling images, based on local invariant features computed at interest point locations, has become a standard choice for many computer vision tasks. Building on this promising model, this thesis investigates three main problems: natural scene classification, annotation and retrieval. Given an image, the task is to design a system that can determine which class that image belongs to (classification), what semantic concepts it contains (annotation), and which images are most similar to it (retrieval). This thesis contributes to scene classification by proposing a weighting approach, named the keypoints density-based weighting method (KDW), to control the fusion of colour information and bag of visual words on a spatial pyramid layout in a unified framework. Different configurations of BOW, integrated visual vocabularies and multiple image descriptors are investigated and analysed. The proposed approaches are extensively evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories using 10-fold cross validation.
The second contribution of this thesis, the scene annotation task, explores whether the integrated visual vocabularies generated for scene classification can be used to model the local semantic information of natural scenes. In this direction, image annotation is treated as a classification problem in which images are partitioned into a 10x10 fixed grid and each block, represented by BOW and different image descriptors, is classified into one of the predefined semantic classes. An image is then represented by counting the percentage of every semantic concept detected in the image. Experimental results on 6 scene categories demonstrate the effectiveness of the proposed approach. Finally, this thesis further explores, with extensive experimental work, the use of different configurations of the BOW model for natural scene retrieval.