Divide-and-conquer based summarization framework for extracting affective video content
Publication date
2016-01

Keywords
Affective content analysis
Big video data
Divide-and-conquer architecture
Human attention modeling
Rights
© 2015 Published by Elsevier B.V. Reproduced in accordance with the publisher's self-archiving policy. This manuscript version is made available under the CC-BY-NC-ND 4.0 license.

Peer-Reviewed
Yes

Open Access status
openAccess
Abstract
Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. It can be used to extract the affective contents of a video sequence and generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual-attention-driven summarization frameworks have high computational cost and memory requirements, and lack efficiency in accurately perceiving human attention. To address these issues, we propose a divide-and-conquer based framework for efficient summarization of big video data. We divide the original video into shots and compute an attention model for each shot in parallel. A viewer's attention is modeled from multiple sensory perceptions, i.e., aural and visual stimuli, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. The neuronal attention is computed from the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge linking the digital representation of multimedia with the viewer's perceptions. Our experimental results indicate that the proposed shot-detection based divide-and-conquer strategy mitigates the time and computational complexity. Moreover, the proposed attention model accurately reflects user preferences and facilitates the extraction of highly affective and personalized summaries.

Version
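The aural attention model builds on the Teager energy operator. As a minimal illustrative sketch (not the authors' implementation), the discrete Teager-Kaiser energy operator psi[x(n)] = x(n)^2 - x(n-1) * x(n+1) can be computed per audio frame as follows; the function name and frame-averaging step are assumptions for illustration:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    psi[x(n)] = x(n)^2 - x(n-1) * x(n+1).
    Returns an array two samples shorter than the input."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def frame_attention(x):
    """Illustrative per-frame aural attention score:
    mean Teager energy of the frame (an assumed aggregation)."""
    return float(np.mean(teager_energy(x)))
```

For a pure sinusoid x(n) = A*sin(omega*n), the operator yields the constant A^2 * sin(omega)^2, which is why it tracks both the amplitude and the frequency of the signal at low cost.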
Accepted manuscript

Citation
Mehmood I, Sajjad M, Rho S et al (2016) Divide-and-conquer based summarization framework for extracting affective video content. Neurocomputing. 174(Part A): 393-403.

Link to Version of Record
https://doi.org/10.1016/j.neucom.2015.05.126

Type
Article