
    • Controlling the material removal and roughness of Inconel 718 in laser machining

      Ahmed, N.; Rafaqat, M.; Pervaiz, S.; Umer, U.; Alkhalefa, H.; Baloch, Muhammad A.S.; Mian, S.H. (2019-05-16)
      Nickel alloys including Inconel 718 are considered challenging materials for machining. Laser beam machining could be a promising choice for producing simple to complex machining features in such materials. Machining accuracy mainly depends on the rate of material removal per laser scan. Because of the many laser parameters involved and the complexity of the machining mechanism, it is not always simple to achieve machining with the desired accuracy. The actual machining depth varies widely, from very low to aggressively high values, with reference to the designed depth. Thus, research is needed to control the process parameters so that the actual material removal rate (MRRact) equals the theoretical material removal rate (MRRth) with minimum surface roughness (SR) of the machined surfaces. In this study, five important laser parameters have been used to investigate their effects on MRR and SR. Statistical analyses are performed to identify the significant parameters and the strength of their effects. Mathematical models have been developed and validated to predict the machining responses. An optimal set of laser parameters has also been proposed and confirmed to achieve an actual MRR close to the designed MRR (MRR% = 100.1%) with minimum surface roughness (Ra = 2.67 µm).
    • Performance evaluation of metamodelling methods for engineering problems: towards a practitioner guide

      Kianifar, Mohammed R.; Campean, I. Felician (Springer, 2019-07)
      Metamodelling or surrogate modelling techniques are frequently used across the engineering disciplines in conjunction with expensive simulation models or physical experiments. With the proliferation of metamodelling techniques developed to provide enhanced performance for specific problems, and the wide availability of a diverse choice of tools in engineering software packages, the engineering task of selecting a robust metamodelling technique for practical problems is still a challenge. This research introduces a framework for describing the typology of engineering problems, in terms of dimensionality and complexity, and the modelling conditions, reflecting the noisiness of the signals and the affordability of sample sizes, and on this basis presents a systematic evaluation of the performance of frequently used metamodelling techniques. A set of metamodelling techniques, selected based on their reported use for engineering problems (i.e. Polynomial, Radial Basis Function, and Kriging), was systematically evaluated in terms of accuracy and robustness against a carefully assembled set of 18 test functions covering different types of problems, sampling conditions and noise conditions. A set of four real-world engineering case studies covering both computer simulation and physical experiments was also analysed as validation tests for the proposed guidelines. The main conclusions drawn from the study are that the Kriging model with the Matérn 5/2 correlation function performs consistently well across different problem types with smooth (i.e. not noisy) data, while the Kriging model with the Matérn 3/2 correlation function provides robust performance under noisy conditions, except for very high noise, where the Kriging model with a nugget appears to provide better models. These results provide engineering practitioners with a guide for choosing a metamodelling technique for the problem types and modelling conditions represented in the study, whereas the evaluation framework and benchmark problem set will be useful for researchers conducting similar studies.
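
The Kriging variants compared above can be sketched compactly. Below is a minimal zero-mean Kriging (Gaussian-process) predictor with a Matérn 5/2 correlation function, illustrating how a nugget term trades exact interpolation for robustness to noise; the length-scale, nugget value and 1-D test function are illustrative assumptions, not the study's settings.

```python
import numpy as np

def matern52(x1, x2, length_scale=0.3):
    # Matern 5/2 correlation between 1-D sample locations
    r = np.abs(x1 - x2) / length_scale
    return (1.0 + np.sqrt(5.0) * r + 5.0 * r**2 / 3.0) * np.exp(-np.sqrt(5.0) * r)

def kriging_predict(X_train, y_train, X_new, length_scale=0.3, nugget=1e-8):
    # Simple (zero-mean) Kriging predictor; a larger nugget smooths over
    # noisy samples instead of interpolating them exactly
    K = matern52(X_train[:, None], X_train[None, :], length_scale)
    K = K + nugget * np.eye(len(X_train))
    k_star = matern52(X_new[:, None], X_train[None, :], length_scale)
    return k_star @ np.linalg.solve(K, y_train)

# Fit a smooth 1-D test function from 8 samples and predict at x = 0.25
X = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * X)
pred = kriging_predict(X, y, np.array([0.25]))
```

With a nugget this small the predictor reproduces the samples almost exactly; increasing it mirrors the noisy-condition recommendation above.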
    • Automatic modulation classification using interacting multiple model - Kalman filter for channel estimation

      Abdul Salam, Ahmed O.; Sheriff, Ray E.; Hu, Yim Fun; Al-Araji, S.R.; Mezher, K. (Institute of Electrical and Electronic Engineers, 2019)
      A rigorous model for automatic modulation classification (AMC) in cognitive radio (CR) systems is proposed in this paper. This is achieved by exploiting the Kalman filter (KF) integrated with an adaptive interacting multiple model (IMM) for resilient estimation of the channel state information (CSI). A novel approach is proposed that adds up the square-root singular values (SRSV) of the channel decomposed using the singular value decomposition (SVD) algorithm. This new scheme, termed Frobenius eigenmode transmission (FET), is chiefly intended to maintain the total power of all individual effective eigenmodes, as opposed to keeping only the dominant one. The analysis is applied over multiple-input multiple-output (MIMO) antennas in combination with a Rayleigh fading channel using a quasi-likelihood ratio test (QLRT) algorithm for AMC. The expectation-maximization (EM) algorithm is employed for recursive computation of the underlying estimation and classification steps. Simulations demonstrate the advantages of the combined IMM-KF structure when compared with the perfectly known channel and the maximum likelihood estimate (MLE), in terms of achieving the targeted optimal performance with the desirable benefit of a lower computational load.
    • A comparison of flare forecasting methods. II. Benchmarks, metrics and performance results for operational solar flare forecasting systems

      Leka, K.D.; Park, S-H.; Kusano, K.; Andries, J.; Barnes, G.; Bingham, S.; Bloomfield, D.S.; McCloskey, A.E.; Delouille, V.; Falconer, D.; et al. (2019)
      Solar flares are extremely energetic phenomena in our Solar System. Their impulsive, often drastic radiative increases, in particular at short wavelengths, bring immediate impacts that motivate solar physics and space weather research to understand solar flares to the point of being able to forecast them. As data and algorithms improve dramatically, questions must be asked concerning how well the forecasting performs; crucially, we must ask how to rigorously measure performance in order to critically gauge any improvements. Building upon earlier-developed methodology (Barnes et al. 2016, Paper I), international representatives of regional warning centers and research facilities assembled in 2017 at the Institute for Space-Earth Environmental Research, Nagoya University, Japan to – for the first time – directly compare the performance of operational solar flare forecasting methods. Multiple quantitative evaluation metrics are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the “no skill” level, although which method scored top marks is decisively a function of flare event definition and the metric used; there was no single winner. Following in this paper series we ask why the performances differ by examining implementation details (Leka et al. 2019, Paper III), and then we present a novel analysis method to evaluate temporal patterns of forecasting errors in (Park et al. 2019, Paper IV). With these works, this team presents a well-defined and robust methodology for evaluating solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be compared.
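
The "no skill" baseline and the metric dependence discussed above rest on contingency-table verification measures. A minimal sketch of a few widely used ones (POD, FAR, TSS, HSS) follows; the counts in the example are invented for illustration, not results from the workshop.

```python
def forecast_skill(hits, misses, false_alarms, correct_negatives):
    # Contingency-table verification metrics common in flare forecasting
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                               # true skill statistic (no skill = 0)
    # Heidke skill score: fraction of correct forecasts beyond random chance
    expect = ((hits + misses) * (hits + false_alarms)
              + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expect) / (n - expect)
    return {"POD": pod, "FAR": far, "TSS": tss, "HSS": hss}

# Invented example counts for one flare event definition
metrics = forecast_skill(hits=40, misses=10, false_alarms=20, correct_negatives=130)
```

Because TSS and HSS weight misses and false alarms differently, the same forecasts can rank differently under each metric, which is one reason no single method "wins" across all evaluations.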
    • A comparison of flare forecasting methods. III. Systematic behaviors of operational solar flare forecasting systems

      Leka, K.D.; Park, S-H.; Kusano, K.; Andries, J.; Barnes, G.; Bingham, S.; Bloomfield, D.S.; McCloskey, A.E.; Delouille, V.; Falconer, D.; et al. (2019)
      A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research, at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort, the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval and the small number of methods available, we do find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, we present in Paper IV a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
    • Saliency-weighted graphs for efficient visual content description and their applications in real-time image retrieval systems

      Ahmad, J.; Sajjad, M.; Mehmood, Irfan; Rho, S.; Baik, S.W. (2017-09)
      The exponential growth in the volume of digital image databases is making it increasingly difficult to retrieve relevant information from them. Efficient retrieval systems require distinctive features extracted from visually rich contents, represented semantically in a human perception-oriented manner. This paper presents an efficient framework to model image contents as an undirected attributed relational graph, exploiting color, texture, layout, and saliency information. The proposed method encodes salient features into this rich representative model without requiring any segmentation or clustering procedures, reducing the computational complexity. In addition, an efficient graph-matching procedure implemented on specialized hardware makes it more suitable for real-time retrieval applications. The proposed framework has been tested on three publicly available datasets, and the results prove its superiority in terms of both effectiveness and efficiency in comparison with other state-of-the-art schemes.
    • UAV based wilt detection system via convolutional neural networks

      Dang, L.M.; Hassan, S.I.; Suhyeon, I.; Sangaiah, A.K.; Mehmood, Irfan; Rho, S.; Seo, S.; Moon, H. (2019)
      The significant role of plants can be observed through the dependency of animals and humans on them. Oxygen, materials, food and the beauty of the world are contributed by plants. Climate change, the decrease in pollinators, and plant diseases are causing a significant decline in both the quality and coverage of plants and crops on a global scale. In developed countries, above 80 percent of rural production is produced by sharecropping. However, due to widespread plant diseases, yields are reported to have declined by more than a half. These diseases are identified and diagnosed by the agricultural and forestry departments. Manual inspection of a large area of fields requires a huge amount of time and effort, thereby reducing effectiveness significantly. To counter this problem, we propose an automatic disease detection and classification method for radish fields. A camera attached to an unmanned aerial vehicle (UAV) captures high-quality images of the fields, which are analyzed by extracting both color and texture features; K-means clustering then filters the radish regions, which are fed into a fine-tuned GoogLeNet to detect Fusarium wilt of radish efficiently at an early stage, allowing the authorities to take timely action and ensuring food safety for current and future generations.
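
The K-means region-filtering step described above can be sketched on synthetic pixel data. This is an illustrative fragment only (deterministic initialisation, invented colour values); the paper's pipeline additionally uses texture features and a fine-tuned network for the actual wilt classification.

```python
import numpy as np

def kmeans(pixels, k=2, iters=15):
    # Minimal k-means on colour vectors; deterministic first/last-pixel
    # initialisation keeps this sketch reproducible
    pixels = np.asarray(pixels, dtype=float)
    centers = pixels[[0, len(pixels) - 1]].copy() if k == 2 else pixels[:k].copy()
    for _ in range(iters):
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Synthetic RGB pixels: 50 vegetation-like and 50 soil-like colours
rng = np.random.default_rng(0)
veg = rng.normal([60.0, 180.0, 60.0], 5.0, size=(50, 3))
soil = rng.normal([120.0, 90.0, 40.0], 5.0, size=(50, 3))
labels, centers = kmeans(np.vstack([veg, soil]), k=2)
radish_mask = labels == int(np.argmax(centers[:, 1]))   # greenest cluster kept
```

Only the pixels in the greenest cluster would then be passed to the classifier, which is what concentrates the computation on candidate radish regions.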
    • Divide-and-conquer based summarization framework for extracting affective video content

      Mehmood, Irfan; Sajjad, M.; Rho, S.; Baik, S.W. (2016-01)
      Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, thereby creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. Video summarization can be used to extract the affective contents of a video sequence to generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual attention driven summarization frameworks have high computational cost and memory requirements, as well as a lack of efficiency in accurately perceiving human attention. To cope with these issues, we propose a divide-and-conquer based framework for an efficient summarization of big video data. We divide the original video data into shots, where an attention model is computed from each shot in parallel. Viewer's attention is based on multiple sensory perceptions, i.e., aural and visual, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. Moreover, the neuronal attention is computed using the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge that links the digital representation of multimedia with the viewer’s perceptions. Our experimental results indicate that the proposed shot-detection based divide-and-conquer strategy mitigates the time and computational complexity. Moreover, the proposed attention model provides an accurate reflection of the user preferences and facilitates the extraction of highly affective and personalized summaries.
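
The aural attention cue above builds on the Teager energy operator, which for a discrete signal is psi[n] = x[n]^2 − x[n−1]·x[n+1]. A short sketch (the tone parameters are illustrative):

```python
import numpy as np

def teager_energy(x):
    # Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*sin(w*n) the operator is exactly the constant A^2 * sin(w)^2
n = np.arange(200)
A, w = 2.0, 0.3
psi = teager_energy(A * np.sin(w * n))
```

Because the operator returns A²·sin²ω for a pure tone, it acts as a compact joint amplitude-frequency energy measure, which is why it is useful for building an aural attention curve.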
    • Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors

      Mehmood, Irfan; Sajjad, M.; Baik, S.W. (2014-09)
      Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. To facilitate efficient WCE data collection and browsing, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve these computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
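
The redundancy-elimination step compares colour histograms with the Jeffrey divergence. A minimal sketch follows, using the common symmetric form with the mid-point distribution m = (p+q)/2; the exact variant and any thresholds used in the paper are not specified here.

```python
import numpy as np

def jeffrey_divergence(p, q, eps=1e-12):
    # Symmetric histogram divergence with mid-point m = (p + q) / 2;
    # eps guards against empty bins
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    m = (p + q) / 2.0
    return float(np.sum(p * np.log((p + eps) / (m + eps))
                        + q * np.log((q + eps) / (m + eps))))

# Two illustrative 3-bin colour histograms
h1 = [0.2, 0.3, 0.5]
h2 = [0.5, 0.3, 0.2]
d = jeffrey_divergence(h1, h2)
```

Consecutive frames whose divergence falls below a threshold would be treated as redundant and dropped from the summary.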
    • Saliency-directed prioritization of visual data in wireless surveillance networks

      Mehmood, Irfan; Sajjad, M.; Ejaz, W.; Baik, S.W. (2015-07)
      In wireless visual sensor networks (WVSNs), streaming all imaging data is impractical due to resource constraints. Moreover, the sheer volume of surveillance videos inhibits the ability of analysts to extract actionable intelligence. In this work, an energy-efficient image prioritization framework is presented to cope with the fragility of traditional WVSNs. The proposed framework selects semantically relevant information before it is transmitted to a sink node. This is based on salient motion detection, which works on the principle of human cognitive processes. Each camera node estimates the background by a bootstrapping procedure, thus increasing the efficiency of salient motion detection. Based on the salient motion, each sensor node is classified as being high or low priority. This classification is dynamic, such that camera nodes toggle between high-priority and low-priority status depending on the coverage of the region of interest. High-priority camera nodes are allowed to access reliable radio channels to ensure the timely and reliable transmission of data. We compare the performance of this framework with other state-of-the-art methods for both single and multi-camera monitoring. The results demonstrate the usefulness of the proposed method in terms of salient event coverage and reduced computational and transmission costs, as well as in helping analysts find semantically relevant visual information.
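
The background-bootstrapping and salient-motion steps described above can be sketched as a running-average model; the learning rate and threshold below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    # Running-average background bootstrap (alpha is illustrative)
    return (1.0 - alpha) * bg + alpha * frame

def salient_motion(bg, frame, thresh=25.0):
    # Pixels deviating strongly from the background count as salient motion
    return np.abs(frame - bg) > thresh

# Bootstrap on a static scene, then flag a bright moving object
static = np.full((4, 4), 10.0)
bg = np.zeros((4, 4))
for _ in range(200):
    bg = update_background(bg, static)

frame = static.copy()
frame[1, 2] = 200.0                          # object enters the scene
mask = salient_motion(bg, frame)
priority = "high" if mask.any() else "low"   # camera-node priority toggle
```

A node whose mask is empty would stay low priority and yield the reliable radio channels to nodes observing salient events.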
    • A note on intelligent exploration of semantic data

      Thakker, Dhaval; Schwabe, D.; Garcia, D.; Kozaki, K.; Brambilla, M.; Dimitrova, V. (2019-04)
      Welcome to this special issue of the Semantic Web (SWJ) journal. The special issue compiles three technical contributions that significantly advance the state-of-the-art in exploration of semantic data using semantic web techniques and technologies.
    • Using deep learning for IoT-enabled smart camera: a use case of flood monitoring

      Mishra, Bhupesh K.; Thakker, Dhaval; Mazumdar, S.; Simpson, S.; Neagu, Daniel (IEEE, 2019-07)
      In recent years, deep learning has been increasingly used for applications such as object analysis, feature extraction and image classification. This paper explores the use of deep learning in a flood monitoring application in the context of an EC-funded project, Smart Cities and Open Data REuse (SCORE). IoT sensors for detecting blocked gullies and drainages are notoriously hard to build, hence we propose a novel technique that utilises deep learning to build an IoT-enabled smart camera to address this need. In our work, we apply deep learning to classify drain blockage images, developing an effective image classification model for different severities of blockage. Using this model, an image can be analysed and classified into a number of classes depending upon the context of the image. In building such a model, we explored the use of filtering in the form of segmentation as one approach to increase classification accuracy by concentrating only on the area of interest within the image. Segmentation is applied in the data pre-processing stage of our application, before training. We used crowdsourced publicly available images to train and test our model. Our model with segmentation showed an improvement in classification accuracy.
    • Ontology-based discovery of time-series data sources for landslide early warning system

      Phengsuwan, J.; Shah, T.; James, P.; Thakker, Dhaval; Barr, S.; Ranjan, R. (2019)
      A modern early warning system (EWS) requires sophisticated knowledge of natural hazards, the urban context and underlying risk factors to enable dynamic and timely decision making (e.g., hazard detection, hazard preparedness). Landslides are a common form of natural hazard with a global impact and are closely linked to a variety of other hazards. EWSs for landslide prediction and detection rely on scientific methods and models which require input from time series data, such as earth observation (EO) and urban environment data. Such data sets are produced by a variety of remote sensing satellites and Internet of Things sensors deployed in landslide-prone areas. To this end, the automatic discovery of potential time series data sources has become a challenge due to the complexity and high variety of data sources. To solve this hard research problem, in this paper we propose a novel ontology, the Landslip Ontology, to provide the knowledge base that establishes relationships between landslide hazards and EO and urban data sources. The purpose of the Landslip Ontology is to facilitate time series data source discovery for the verification and prediction of landslide hazards. The ontology is evaluated based on scenarios and competency questions to verify its coverage and consistency. Moreover, the ontology can also be used to realize the implementation of a data source discovery system, an essential component in an EWS that needs to manage (store, search, process) rich information from heterogeneous data sources.
    • Flexural toughness and calculation model of super-fine stainless wire reinforced reactive powder concrete

      Dong, S.; Zhou, D.; Ashour, Ashraf F.; Han, B.; Ou, J. (2019-11)
      As an excellent reinforcing filler, super-fine stainless wire (SSW) can form a widely distributed network in reactive powder concrete (RPC) to transfer crack-tip stresses and inhibit the initiation and propagation of cracks, leading to a significant improvement in the flexural toughness of RPC. In this paper, the flexural toughness of RPC beams and plates reinforced with 1% and 1.5% by vol. of SSWs was investigated, and a calculation model was established according to composite material theory. Experimental results showed that the flexural toughness of unnotched beams fabricated with RPC containing 1.5% SSWs is 146.5% higher than that of control RPC without SSWs, according to load-deflection relationships. The equivalent flexural strength of notched RPC beams is enhanced by 80.0% as the SSW content increases from 1% to 1.5%. The ability of SSWs to limit crack mouth opening can be used to evaluate the flexural toughness of the composites. An addition of 1.5% SSWs leads to a 201.9% increase in the flexural toughness of RPC plates, in accordance with load-deflection relationships. The calculation model based on composite material theory can accurately describe the toughening effect of SSWs on RPC beams and plates. The enhancement of flexural toughness of RPC by SSWs is beneficial for improving the safety of structures as well as broadening the engineering applications of the composites.
    • Similarity hash based scoring of portable executable files for efficient malware detection in IoT

      Namanya, Anitta P.; Awan, Irfan U.; Disso, J.P.; Younas, M. (Elsevier, 2019)
      The current rise in malicious attacks shows that existing security systems are being bypassed by malicious files. Similarity hashing has been adopted for sample triaging in malware analysis and detection. File similarity is used to cluster malware into families so that a common signature can be designed. This paper explores four hash types currently used in malware analysis for portable executable (PE) files. Although each hashing technique produces interesting results, when applied independently they have high false detection rates. This paper investigates the central issue of how different hashing techniques can be combined to provide a quantitative malware score and achieve better detection rates. We design and develop a novel approach for malware scoring based on the hash results. The proposed approach is evaluated through a number of experiments. Evaluation clearly demonstrates a significant improvement (> 90%) in true detection rates of malware.
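
One simple way to combine per-hash similarities into a single quantitative score, as the abstract describes, is a normalised weighted sum. The hash names, similarity values, equal weighting and decision threshold below are illustrative assumptions; the paper's actual scoring scheme may differ.

```python
def malware_score(hash_similarities, weights=None):
    # hash_similarities: {hash_name: similarity in [0, 1]} against a known
    # malware family; names and weights here are illustrative assumptions
    if weights is None:
        weights = {name: 1.0 for name in hash_similarities}
    total = sum(weights[name] for name in hash_similarities)
    return sum(weights[name] * s for name, s in hash_similarities.items()) / total

# Example: four similarity-hash comparisons for one PE sample
sample = {"ssdeep": 0.90, "imphash": 1.00, "pehash": 0.75, "sdhash": 0.80}
score = malware_score(sample)
flagged = score >= 0.7        # illustrative decision threshold
```

Combining the four signals this way dampens the false detections any single hash produces on its own, which is the intuition behind the paper's scoring approach.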
    • Long term evolution of the surface refractivity for arctic regions

      Bettouche, Y.; Kouki, A.; Agba, B.; Obeidat, H.; Alhassan, H.; Rodriguez, J.; Abd-Alhameed, Raed A.; Jones, Steven M.R. (2019)
      In this paper, local meteorological data for a period of 35 years (from 1979 to 2013) from Kuujuaq station have been used to calculate the surface refractivity, N, and to estimate the vertical refractivity gradient, dN1, in the lowest atmospheric layer above the ground. Monthly and yearly variations of the means of N and dN1 are provided. The values obtained are compared with the corresponding values from the ITU maps. The long-term trend of the surface refractivity is also investigated. The data demonstrate that the indices N and dN1 are subject to an evolution which may be significant in the context of climate change (CC). Monthly means of N show an increasing departure from ITU-R values since 1990. Yearly mean values of dN1 show a progressive decrease over the period of study. Seasonal means of dN1 also decrease over time, especially in summer. Such a trend may increase the occurrence of super-refraction. However, currently available ITU-R recommendations for microwave link design assume a stationary climate, so there is a need for a new modelling approach.
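
The surface refractivity N referred to above is conventionally computed from pressure, water-vapour pressure and temperature via the ITU-R P.453 expression N = 77.6/T · (P + 4810·e/T). A direct transcription (the sample inputs are illustrative mid-latitude values, not Kuujuaq data):

```python
def surface_refractivity(P_hPa, e_hPa, T_K):
    # ITU-R P.453: N = 77.6/T * (P + 4810*e/T), in N-units
    # P: total pressure (hPa), e: water-vapour pressure (hPa), T: kelvin
    return 77.6 / T_K * (P_hPa + 4810.0 * e_hPa / T_K)

# Illustrative surface conditions: 1013 hPa, e = 10 hPa, 15 °C
N = surface_refractivity(1013.0, 10.0, 288.0)
```

Applying this formula to each meteorological record is what yields the monthly and yearly N series whose departure from the ITU maps the paper tracks.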
    • Laboratory experimental study of ocean waves propagating over a partially buried pipeline in a trench layer

      Sun, K.; Zhang, J.; Gao, Y.; Jeng, D.; Guo, Yakun; Liang, Z. (2019-02-01)
      Seabed instability around a pipeline is one of the primary concerns in offshore pipeline projects. To date, most studies focus on investigating the wave/current-induced response within a porous seabed around either a fully buried pipeline or a thoroughly exposed one. In this study, unlike previous investigations, a series of comprehensive laboratory experiments are carried out in a wave flume to investigate the wave-induced pore pressures around a partially embedded pipeline in a trench layer. Measurements show that the presence of the partially buried pipeline can significantly affect the excess pore pressure in a partially backfilled trench layer, which deviates considerably from that predicted by the theoretical approach. The morphology of the trench layer accompanied with the backfill sediments, especially the deeper trench and thicker backfill (i.e., b ≥ 1D, e ≥ 0.5D), provides a certain degree of resistance to seabed instability. The amplitude of excess pore pressure around the trench layer roughly exhibits a left-right asymmetric distribution along the periphery of the pipeline, and decays sharply from the upper layer of the trench to the lower region. Deeper trench depth and thicker buried layer significantly weaken the pore-water pressures in the whole trench area, thus sheltering and protecting the submarine pipeline against the transient seabed liquefaction.
    • Investigation on scour scale of piggyback pipeline under wave conditions

      Yang, S.; Shi, B.; Guo, Yakun (2019-06-15)
      Laboratory experiments are presented to investigate the effect of different piggyback pipeline configurations on the morphology of local scour under wave conditions. Scour depth and width around the pipelines under regular and irregular waves are measured and analyzed for a range of pipeline and wave conditions, such as the spacing between the two pipes (G), the gap between the main pipe and the seabed (e), pipe diameter (D), wave height (H) and period (T). Experimental results reveal that both the scour depth and width around a piggyback pipeline are much larger than those around a single pipe under the same wave conditions. Scour depth increases with increasing Keulegan-Carpenter (KC) number and decreases with increasing G and e. When e exceeds 0.5D, scour depth tends to approach zero. When the spacing G is greater than 0.4D, the destabilization of the large pipe by the small one is greatly reduced, resulting in scour depth around the piggyback pipeline being close to that around a single pipe. Similar to scour depth, scour width broadens with increasing KC number and decreases with increasing G. Experiments also show that the effect of e on scour depth is greater than that of G under the same test conditions, while their impact on scour width is the opposite. Furthermore, scour width under irregular waves is slightly larger than under regular waves for otherwise identical conditions.
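
The Keulegan-Carpenter number that governs the scour depth above is KC = U_m·T/D, with U_m the near-bed orbital velocity amplitude. A one-line transcription (the sample values are illustrative lab-scale numbers, not the paper's test conditions):

```python
def keulegan_carpenter(U_m, T, D):
    # KC = U_m * T / D: near-bed orbital velocity amplitude (m/s),
    # wave period (s), pipe diameter (m)
    return U_m * T / D

# Illustrative lab-scale values: 0.4 m/s, 1.4 s, 50 mm pipe
KC = keulegan_carpenter(0.4, 1.4, 0.05)
```

Larger KC means the orbital excursion is long relative to the pipe diameter, strengthening vortex shedding and hence scour, which matches the trend reported above.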
    • Investigation on scour protection of submarine piggyback pipeline

      Yang, S.; Shi, B.; Guo, Yakun; Yang, L. (2019-06-15)
      This paper presents the results of laboratory experiments and numerical simulations investigating the effect of different piggyback pipeline configurations on the morphology of local seabed scour subject to steady currents. The configurations investigated include the commonly used piggyback pipeline, namely a small pipe attached on top of a large pipe, and a new form of piggyback pipeline proposed in this study, in which a small pipe is attached to the large pipe on the upstream and downstream sides, respectively. Pressure gradient, drag coefficient, lift coefficient and scour extent around the pipelines are measured and analyzed for a range of pipeline and current conditions. Results show that the vortex strength downstream of the commonly used piggyback pipeline is larger than that for a single pipe as well as for the new piggyback pipeline under the same conditions. The new type of piggyback pipeline can effectively reduce the depth and width of the scour hole. In particular, when the ratio of the small pipe diameter to the large pipe diameter is greater than 0.3, little scour occurs under this new type of piggyback pipeline for the test conditions. The bed topography downstream of the pipe is also altered in favor of backfilling.
    • Numerical and experimental analysis of shallow turbulent flow over complex roughness beds

      Zhang, Y.; Rubinato, M.; Kazemi, E.; Pu, Jaan H.; Huang, Y.; Lin, P. (2019)
      A set of shallow-water equations (SWEs) based on a k-epsilon Reynolds stress model is established to simulate turbulent flows over a complex roughness bed. The fundamental equations are discretized by the second-order finite-difference method (FDM), in which spatial and temporal discretization are conducted by staggered-grid and leap-frog schemes, respectively. The turbulence model in this study stems from the standard k-epsilon model, but is enhanced by replacing the conventional vertical production term with a more rigorous and precise generation term derived from the energy spectrum and turbulence scales. To verify its effectiveness, the model is applied to compute the turbulence in complex flow surroundings (including a rough bed) in an abrupt bend and in a natural waterway. The comparison of the model results against experimental data and other numerical results shows the robustness and accuracy of the present model in describing hydrodynamic characteristics, especially turbulence features over the complex roughness bottom.
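
The staggered-grid discretisation described above can be illustrated with a minimal 1-D linearised shallow-water solver. This sketch uses a simple forward-backward staggered update rather than the paper's exact second-order leap-frog scheme, and omits the k-epsilon closure entirely; grid and wave parameters are illustrative.

```python
import numpy as np

g, h0 = 9.81, 1.0                      # gravity, still-water depth (illustrative)
nx, dx = 100, 1.0
dt = 0.5 * dx / np.sqrt(g * h0)        # CFL-stable time step
x = np.arange(nx) * dx
eta = np.exp(-0.05 * (x - 50.0) ** 2)  # initial free-surface hump
u = np.zeros(nx + 1)                   # velocities at cell faces (staggered grid)
mass0 = eta.sum()                      # closed boundaries conserve total mass

for _ in range(40):
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx   # momentum equation
    eta -= dt * h0 * (u[1:] - u[:-1]) / dx          # continuity equation
```

Placing eta at cell centres and u at cell faces keeps each spatial difference centred, which is the property the paper's staggered-grid scheme exploits for second-order accuracy.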