Divide-and-conquer based summarization framework for extracting affective video content
Mehmood, Irfan ; Sajjad, M. ; Rho, S. ; Baik, S.W.
Publication Date
2016-01
Rights
© 2015 Published by Elsevier B.V. Reproduced in accordance with the publisher's self-archiving policy. This manuscript version is made available under the CC-BY-NC-ND 4.0 license.
Peer-Reviewed
Yes
Open Access status
openAccess
Accepted for publication
2015-05-01
Abstract
Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, thereby creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. Video summarization can be used to extract the affective contents of a video sequence to generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual-attention-driven summarization frameworks have high computational cost and memory requirements, as well as a lack of efficiency in accurately perceiving human attention. To cope with these issues, we propose a divide-and-conquer based framework for efficient summarization of big video data. We divide the original video data into shots, and an attention model is computed from each shot in parallel. A viewer's attention is based on multiple sensory perceptions, i.e., aural and visual, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. Moreover, the neuronal attention is computed using the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge that links the digital representation of multimedia with the viewer's perceptions. Our experimental results indicate that the proposed shot-detection based divide-and-conquer strategy reduces the time and computational complexity. Moreover, the proposed attention model provides an accurate reflection of user preferences and facilitates the extraction of highly affective and personalized summaries.
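The aural attention model described above is built in part on the Teager energy of the audio track. As a minimal sketch of that one component (not the authors' full pipeline), the discrete Teager-Kaiser energy operator is Ψ[x(n)] = x(n)² − x(n−1)·x(n+1), and a frame-level attention curve can be formed by averaging it per frame and normalizing. The frame length and min-max normalization here are illustrative assumptions, and the instant amplitude/frequency, visual, and neuronal components are omitted:

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser energy operator: psi[x(n)] = x(n)^2 - x(n-1)*x(n+1).
    Endpoints are set to zero since they lack a neighbor."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def aural_attention(signal, frame_len=400):
    """Frame-level aural attention: mean Teager energy per frame,
    min-max normalized to [0, 1]. frame_len is an assumed parameter,
    not a value taken from the paper."""
    psi = teager_energy(signal)
    n_frames = len(psi) // frame_len
    frames = psi[: n_frames * frame_len].reshape(n_frames, frame_len)
    curve = frames.mean(axis=1)
    rng = curve.max() - curve.min()
    return (curve - curve.min()) / rng if rng > 0 else np.zeros_like(curve)
```

For a pure sinusoid of amplitude A and digital frequency ω, the operator yields an approximately constant A²·sin²(ω), so peaks in the resulting curve track bursts of signal energy and frequency, which is what makes it a useful attention cue.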
Version
Accepted manuscript
Citation
Mehmood I, Sajjad M, Rho S et al (2016) Divide-and-conquer based summarization framework for extracting affective video content. Neurocomputing. 174(Part A): 393-403.
Type
Article