BRADFORD SCHOLARS



    A deep multi-modal neural network for informative Twitter content classification during emergencies

    View/Open
    Rana_Annals_of_Operations_Research.pdf (877.0Kb)
    Publication date
    2020
    Author
    Kumar, A.
    Singh, J.P.
    Dwivedi, Y.K.
    Rana, Nripendra P.
    Keyword
    Disaster
    Twitter
    LSTM
    VGG-16
    Social media
    Tweets
    Rights
    © Springer Science+Business Media, LLC, part of Springer Nature 2020. Reproduced in accordance with the publisher's self-archiving policy. The final publication is available at Springer via https://doi.org/10.1007/s10479-020-03514-x
    Peer-Reviewed
    Yes
    
    Abstract
    People start posting tweets containing text, images, and videos as soon as a disaster hits an area. The analysis of these disaster-related tweet texts, images, and videos can help humanitarian response organizations make better decisions and prioritize their tasks. Finding the informative content that can support decision-making within the massive volume of Twitter content is a difficult task and requires a system to filter out the informative items. In this paper, we present a multi-modal approach to identifying disaster-related informative content from Twitter streams using text and images together. Our approach is based on long short-term memory (LSTM) and VGG-16 networks and shows a significant improvement in performance, as evident from the validation results on seven different disaster-related datasets. The F1-score ranged from 0.74 to 0.93 when tweet texts and images were used together, whereas with tweet text alone it ranged from 0.61 to 0.92. These results show that the proposed multi-modal system performs well at identifying disaster-related informative social media content.
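
    The paper's code is not part of this record, but the architecture the abstract describes (an LSTM branch for tweet text fused with a VGG-16 branch for tweet images) can be sketched as follows. This is a minimal illustration assuming TensorFlow/Keras; the vocabulary size, sequence length, layer widths, and fusion strategy are assumptions made for illustration, not values taken from the paper.

    # Minimal sketch of the multi-modal classifier described in the abstract:
    # an LSTM over tweet tokens and a VGG-16 feature extractor over tweet
    # images, fused into a single informative/not-informative prediction.
    # All hyperparameters below are illustrative assumptions, not the paper's.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB_SIZE = 20000   # assumed tokenizer vocabulary size
    SEQ_LEN = 50         # assumed maximum tokens per tweet

    # Text branch: token embedding followed by an LSTM.
    text_in = layers.Input(shape=(SEQ_LEN,), name="tweet_tokens")
    x = layers.Embedding(VOCAB_SIZE, 128)(text_in)
    x = layers.LSTM(128)(x)

    # Image branch: VGG-16 pretrained on ImageNet, used frozen here.
    img_in = layers.Input(shape=(224, 224, 3), name="tweet_image")
    vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
    vgg.trainable = False
    y = layers.GlobalAveragePooling2D()(vgg(img_in))

    # Fusion: concatenate both modalities and classify.
    z = layers.Concatenate()([x, y])
    z = layers.Dense(256, activation="relu")(z)
    out = layers.Dense(1, activation="sigmoid", name="informative")(z)

    model = Model(inputs=[text_in, img_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])

    The abstract does not specify how the two modalities are fused, so the single concatenation layer above is only one plausible choice; the branches could equally be trained separately and combined later.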
    URI
    http://hdl.handle.net/10454/17558
    Version
    Accepted manuscript
    Citation
    Kumar A, Singh JP, Dwivedi YK et al (2020) A deep multi-modal neural network for informative Twitter content classification during emergencies. Annals of Operations Research. Accepted for Publication.
    Link to publisher’s version
    https://doi.org/10.1007/s10479-020-03514-x
    Type
    Article
    Collections
    Management and Law Publications
