
    • Analytical solution of shallow water equations for ideal dam-break flood along a wet bed slope

      Wang, B.; Chen, Y.; Peng, Y.; Zhang, J.; Guo, Yakun (2019)
      The existing analytical solutions of dam-break flow do not simultaneously consider the effects of a wet downstream bottom and bed slope on dam-break wave propagation. In this study, a new analytical solution of the shallow-water equations (SWE) is developed to remove this limitation and simulate the wave caused by an instantaneous dam break. The approach adopts the method of characteristics and has been applied to simulate dam-break flows with different downstream water depths and slopes. The analytical solutions have been compared with predictions by the lattice Boltzmann method, and the agreement is good. Although the proposed analytical solution treats an idealized case, it is nonetheless suitable for assessing the robustness and accuracy of numerical models based on the SWE without the frictional slope.
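The classical frictionless dry-bed (Ritter) solution is the limiting case that wet-bed solutions such as this one generalize, and it follows directly from the method of characteristics. The sketch below is illustrative of that limiting case only, not of the authors' wet-bed solution:

```python
import math

def ritter_solution(x, t, h0, g=9.81):
    """Classical Ritter dry-bed dam-break solution of the frictionless SWE.
    Dam at x = 0, initial upstream depth h0, dry bed downstream; t > 0.
    Returns (depth h, velocity u) at station x and time t."""
    c0 = math.sqrt(g * h0)        # celerity of the initial still water
    xi = x / t                    # similarity variable
    if xi <= -c0:                 # undisturbed reservoir
        return h0, 0.0
    if xi >= 2.0 * c0:            # ahead of the wet/dry front
        return 0.0, 0.0
    h = (2.0 * c0 - xi) ** 2 / (9.0 * g)   # parabolic free surface
    u = 2.0 / 3.0 * (xi + c0)
    return h, u
```

At the dam site (x = 0) this gives the well-known constant depth 4h0/9, a useful sanity check for any SWE solver run on the same idealized case.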
    • Predicting the vertical low suspended sediment concentration in vegetated flow using a random displacement model

      Huai, W.; Yang, L.; Wang, W-J.; Guo, Yakun; Wang, T.; Cheng, Y. (2019-11)
      Based on the Lagrangian approach, this study proposes a random displacement model (RDM) to predict the concentration of suspended sediment in vegetated steady open-channel flow. The method was validated by comparing RDM simulations with available experimental measurements for uniform open-channel flows, and further against the classical Rouse formula. To simulate the important vertical dispersion caused by vegetation in sediment-laden open-channel flow, a new integrated sediment diffusion coefficient is introduced, equal to the turbulent diffusion coefficient multiplied by a coefficient. The RDM approach was thus established for predicting the suspended sediment concentration in low-sediment-concentration flow with both emergent and submerged vegetation. The study shows that the value of this coefficient for submerged vegetation flow is larger than that for emergent vegetation flow. The simulated results using the RDM are in good agreement with the available experimental data, indicating that the proposed sediment diffusion coefficient model can be used to accurately investigate the sediment concentration in vegetated steady open-channel flow.
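The two ingredients named above, the Rouse reference profile and a random-displacement step, can be sketched as follows. The parameter values (von Kármán constant, shear velocity) are illustrative assumptions, not the paper's calibrated values:

```python
import math
import random

def rouse_profile(z, h, a, c_a, w_s, kappa=0.41, u_star=0.05):
    """Classical Rouse concentration profile used to validate the RDM.
    c/c_a = [((h - z)/z) * (a/(h - a))]**P with Rouse number
    P = w_s / (kappa * u_star); a is the reference height, h the depth."""
    P = w_s / (kappa * u_star)
    return c_a * (((h - z) / z) * (a / (h - a))) ** P

def rdm_step(z, w_s, D, dt, rng=random):
    """One random-displacement step: downward settling drift w_s plus a
    turbulent-diffusion kick, z' = z - w_s*dt + sqrt(2*D*dt)*xi, xi ~ N(0,1)."""
    return z - w_s * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
```

Averaging many such particle trajectories over height bins recovers a concentration profile that can be compared against the Rouse formula.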
    • Process Simulation of Impurity Impacts on CO2 Fluids Flowing in Pipelines

      Peletiri, Suoton P.; Mujtaba, Iqbal M.; Rahmanian, Nejat (2019-12-10)
      Captured carbon dioxide flowing in pipelines is impure, and the impurities it contains affect the properties of the fluid. The impact of each impurity has not been adequately studied and fully understood. In this study, binary mixtures containing carbon dioxide and one impurity at the maximum permitted concentration, flowing in pipelines, are studied to understand their impact on pipeline performance. A hypothetical 70 km uninsulated pipeline is assumed and simulated using Aspen HYSYS (v.10) and gPROMS (v.5.1.1). The mass flow rate is 2,200,600 kg/h; the internal and external diameters are 0.711 m and 0.785 m, respectively. Inlet and minimum pressures of 15 MPa and 9 MPa and an inlet temperature of 33 °C were assumed to ensure that the fluid remains in the dense (subcritical or supercritical) phase. Each binary fluid is studied at the maximum allowable concentration, and deviations from pure carbon dioxide at the same conditions are determined. These deviations were graded to rank the impurities in order of the degree of impact on each parameter. All impurities had at least one negative impact on carbon dioxide fluid flow. Nitrogen, with the highest concentration (10 mol%), had the worst impact on pressure loss (in a horizontal pipeline), density, and critical pressure. Hydrogen sulphide (at 1.5 mol%) had the least impact, hardly changing the thermodynamic properties of pure carbon dioxide.
    • Rheology of waxy crude oils in relation to restart of gelled pipelines

      Fakroun, A.; Benkreira, Hadj (Elsevier Ltd, 2019-09)
      Waxy crude oils are pumped hot, but upon a power cut pumping stops and the oil cools; below the wax appearance temperature this leads to precipitation of the wax and the formation of a gel throughout the pipe. In such a situation, what is the minimum pressure required to restart flow, rather than merely deform the gel or break it? This paper provides a solution to this problem using microscopic observations under controlled cooling conditions, rheological data collected in constant-stress mode under controlled temperature and cooling conditions, and restart experiments in laboratory pipelines replicating the rheometric conditions, with deviations from them to inform large-diameter operation in the field. Three important findings derive from the experimental data collected: (i) a fragmentation stress, rather than the static stress that precedes it, is found to be the more accurate predictor of flow restart pressures; (ii) waxy crude oil gels exhibit a true yield stress and yielding process, but also show flow on application of the slightest stress below yielding; (iii) this flow, in the elastic region, is jagged rather than continuous, suggesting a consolidation process of the crystals and their agglomerates forming the gel. In the broader context of the existence of a yield stress, the data presented here show that there is such a thing as a yield stress, and that the concepts of a yield stress and of everything flowing are not mutually exclusive.
    • In situ fluorescence measurements of dissolved organic matter: a review

      Carstea, E.M.; Popa, C.L.; Baker, A.; Bridgeman, John (2019)
      There is a need for an inexpensive, reliable and fast monitoring tool to detect contaminants in a short time, for quick mitigation of pollution sources and site remediation, and for characterization of natural dissolved organic matter (DOM). Fluorescence spectroscopy has proven to be an excellent technique for quantifying aquatic DOM from autochthonous, allochthonous or anthropogenic sources. This paper reviews the advances in in situ fluorescence measurements of DOM and pollutants in various water environments. Studies have demonstrated, using high-temporal-frequency DOM fluorescence data, that marine autochthonous production of DOM is highly complex and that the allochthonous input of DOM from freshwater to marine water can be predicted. Furthermore, river measurement studies found a delayed fluorescence response of DOM following precipitation compared to turbidity and discharge, with various lags depending on season, site and dissolved organic carbon (DOC) input. In addition, research has shown that blue-light fluorescence (λemission = 430–500 nm) can be a good proxy for DOC in environments with terrestrial inputs, and ultraviolet fluorescence (λemission = 320–400 nm, UVA) for biochemical oxygen demand, and also for E. coli in environments with sanitation issues. Correcting raw fluorescence data improves the relationship between fluorescence intensity and these parameters. This review also presents the specific steps and parameters that must be considered before and during an in situ fluorescence measurement session for a harmonized qualitative and quantitative protocol. Finally, the strengths and weaknesses of the research on in situ fluorescence are identified.
    • Identifying molecular mass of coagulant protein from edible Hibiscus seeds using SDS-PAGE analysis

      Jones, Alfred N.; Bridgeman, John (2019-11)
      This study used sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis and a jar test apparatus to investigate the molecular weight (MW) and turbidity removal potential of Hibiscus seeds. Three Hibiscus species were assessed: okra crude extract (OCE), sabdariffa crude extract (SCE), and kenaf crude extract (KCE). Furthermore, purified versions of each [i.e., purified okra protein (POP), purified sabdariffa protein (PSP), and purified kenaf protein (PKP)] obtained from anionic exchange were evaluated. The results show that whereas the crude extracts had multiple proteins with MW sizes between 11 and 82 kDa, the purified samples consisted of a single coagulant protein band around 39 kDa. In each case, significant turbidity removal was recorded with the purified proteins; POP, PSP, and PKP achieved approximately 98%, 94%, and 90% removal, respectively, at a reduced dosage of ≤0.6 mg/L. However, OCE and SCE achieved lower turbidity removals of 86% and 85%, respectively, using 40 mg/L doses, whereas KCE recorded only 73% turbidity removal with a 60 mg/L dose. Sludge generation by crude and purified proteins was approximately 25% of the sludge produced by aluminum sulfate, with the additional benefit of being biodegradable. Therefore, the coagulant protein in Hibiscus plant seeds has potential applications for improving access to clean water in developing countries.
    • Bond between glass fibre reinforced polymer bars and high - strength concrete

      Saleh, N.; Ashour, Ashraf F.; Sheehan, Therese (Elsevier, 2019-12)
      In this study, bond properties of glass fibre reinforced polymer (GFRP) bars embedded in high-strength concrete (HSC) were experimentally investigated using a pull-out test. The experimental program consisted of testing 84 pull-out specimens prepared according to the ACI 440.3R-12 standard. The testing of the specimens considered bar diameter (9.5, 12.7 and 15.9 mm), embedment length (2.5, 5, 7.5 and 10 times the bar diameter) and surface configuration (helical wrapping with slight sand coating (HW-SC) and sand coating (SC)) as the main parameters. Twelve pull-out specimens reinforced with 16 mm steel bars were also tested for comparison purposes. Most of the specimens failed in a pull-out mode. Visual inspection of the tested specimens reinforced with GFRP (HW-SC) bars showed that the pull-out failure was due to damage of the outer bar surface, whilst detachment of the sand coating was responsible for the bond failure of GFRP (SC) reinforced specimens. The bond stress–slip behaviour of GFRP (HW-SC) bars is different from that of GFRP (SC) bars, and GFRP (SC) bars gave better bond performance than GFRP (HW-SC) bars. The reduction in bond strength of both GFRP types with increasing bar diameter and embedment length was smaller in high-strength concrete. Bond strength predictions obtained from the ACI-440.1R, CSA-S806, CSA-S6 and JSCE design codes were compared with the experimental results. Overall, all design guidelines were conservative in predicting the bond strength of both GFRP bar types in HSC, and the ACI predictions were closer to the test results than those of the other codes.
    • Modelling the mechanical and strain recovery behaviour of partially crystalline PLA

      Sweeney, John; Spencer, Paul E.; Nair, Karthik Jayan; Coates, Philip D. (2019-08-13)
      This is a study of the modelling and prediction of strain recovery in a polylactide. Strain recovery near the glass transition temperature is the underlying mechanism for shape memory in an amorphous polymer, and the investigation is aimed at modelling such shape memory behaviour. A PLA-based copolymer is subjected to stress–strain, stress relaxation and strain recovery experiments at large strain at 60 °C, just below its glass transition temperature. The material is 13% crystalline. Using published data on the mechanical properties of the crystals, finite element modelling was used to determine the effect of the crystal phase on the overall mechanical behaviour of the material, which was found to be significant. The finite element models were also used to relate the stress–strain results to the yield stress of the amorphous phase. This yield stress was found to possess strain rate dependence consistent with an Eyring process. Stress relaxation experiments were also interpreted in terms of the Eyring process, and a two-process Eyring-based model was defined that was capable of modelling strain recovery behaviour. This was essentially a model of the amorphous phase, and it was shown to be capable of useful predictions of strain recovery.
    • Nanoparticle enhanced eutectic reaction during diffusion brazing of aluminium to magnesium

      Akhtar, T.S.; Cooke, Kavian O.; Khan, Tahir I.; Shar, M.S. (2019-03)
      Diffusion brazing has gained much popularity as a technique capable of joining dissimilar lightweight metal alloys and has the potential for a wide range of applications in aerospace and transportation industries, where microstructural changes that will determine the mechanical and chemical properties of the final joint must be controlled. This study explores the effect of Al2O3 nanoparticles on the mechanical and microstructural properties of diffusion brazed magnesium (AZ31) and aluminium (Al-1100) joints. The results showed that the addition of Al2O3 nanoparticle to the electrodeposited Cu coating increased the volume of eutectic liquid formed at the interface which caused a change to the bonding mechanism and accelerated the bonding process. When the Cu/Al2O3 nanocomposite coatings were used as the interlayer, a maximum bond strength of 46 MPa was achieved after 2 min bonding time while samples bonded using pure-Cu interlayers achieved maximum strength after 10 min bonding time. Chemical analysis of the bond region confirmed that when short bonding times are used, the intermetallic compounds formed at the interface are limited to the compounds consumed in the eutectic reaction.
    • Controlling the material removal and roughness of Inconel 718 in laser machining

      Ahmed, N.; Rafaqat, M.; Pervaiz, S.; Umer, U.; Alkhalefa, H.; Baloch, Muhammad A.S.; Mian, S.H. (2019-05-16)
      Nickel alloys, including Inconel 718, are considered challenging materials for machining. Laser beam machining could be a promising choice for producing simple to complex machining features in such materials. The machining accuracy is mainly dependent on the rate of material removal per laser scan. Because of the many laser parameters involved and the complexity of the machining mechanism, it is not always simple to achieve machining with the desired accuracy: the actual machining depth varies widely, from very low to aggressively high values, relative to the designed depth. Thus, research is needed to control the process parameters so that the actual material removal rate (MRRact) equals the theoretical material removal rate (MRRth) with minimum surface roughness (SR) of the machined surfaces. In this study, five important laser parameters have been used to investigate their effects on MRR and SR. Statistical analyses are performed to identify the significant parameters and the strength of their effects. Mathematical models have been developed and validated to predict the machining responses. An optimal set of laser parameters has also been proposed and confirmed to achieve an actual MRR close to the designed MRR (MRR% = 100.1%) with minimum surface roughness (Ra = 2.67 µm).
    • Performance evaluation of metamodelling methods for engineering problems: towards a practitioner guide

      Kianifar, Mohammed R.; Campean, I. Felician (Springer, 2019-07)
      Metamodelling or surrogate modelling techniques are frequently used across the engineering disciplines in conjunction with expensive simulation models or physical experiments. With the proliferation of metamodeling techniques developed to provide enhanced performance for specific problems, and the wide availability of a diverse choice of tools in engineering software packages, the engineering task of selecting a robust metamodeling technique for practical problems is still a challenge. This research introduces a framework for describing the typology of engineering problems, in terms of dimensionality and complexity, and the modelling conditions, reflecting the noisiness of the signals and the affordability of sample sizes, and on this basis presents a systematic evaluation of the performance of frequently used metamodeling techniques. A set of metamodeling techniques, selected based on their reported use for engineering problems (i.e. Polynomial, Radial Basis Function, and Kriging), were systematically evaluated in terms of accuracy and robustness against a carefully assembled set of 18 test functions covering different types of problems, sampling conditions and noise conditions. A set of four real-world engineering case studies covering both computer simulation and physical experiments were also analysed as validation tests for the proposed guidelines. The main conclusions drawn from the study are that Kriging model with Matérn 5/2 correlation function performs consistently well across different problem types with smooth (i.e. not noisy) data, while Kriging model with Matérn 3/2 correlation function provides robust performance under noisy conditions, except for the very high noise conditions, where the Kriging model with nugget appears to provide better models. 
These results provide engineering practitioners with a guide for the choice of a metamodeling technique for problem types and modelling conditions represented in the study, whereas the evaluation framework and benchmarking problems set will be useful for researchers conducting similar studies.
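The Matérn correlation functions compared in the study have simple closed forms. The sketch below uses an illustrative length-scale theta and is not tied to any particular Kriging implementation:

```python
import math

def matern32(r, theta=1.0):
    """Matern 3/2 correlation: (1 + s) * exp(-s), s = sqrt(3)*r/theta.
    Once-differentiable sample paths; the robust choice under noise."""
    s = math.sqrt(3.0) * r / theta
    return (1.0 + s) * math.exp(-s)

def matern52(r, theta=1.0):
    """Matern 5/2 correlation: (1 + s + s^2/3) * exp(-s), s = sqrt(5)*r/theta.
    Smoother sample paths; preferred for smooth (not noisy) data."""
    s = math.sqrt(5.0) * r / theta
    return (1.0 + s + s * s / 3.0) * math.exp(-s)
```

Both equal 1 at zero lag and decay with distance; the 5/2 kernel decays more slowly at moderate lags, reflecting its stronger smoothness assumption.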
    • Automatic modulation classification using interacting multiple model - Kalman filter for channel estimation

      Abdul Salam, Ahmed O.; Sheriff, Ray E.; Hu, Yim Fun; Al-Araji, S.R.; Mezher, K. (Institute of Electrical and Electronic Engineers, 2019-09)
      A rigorous model for automatic modulation classification (AMC) in cognitive radio (CR) systems is proposed in this paper. This is achieved by exploiting the Kalman filter (KF) integrated with an adaptive interacting multiple model (IMM) for resilient estimation of the channel state information (CSI). A novel approach is proposed that adds up the square-root singular values (SRSV) of the channel decomposed with the singular value decomposition (SVD) algorithm. This new scheme, termed Frobenius eigenmode transmission (FET), is chiefly intended to maintain the total power of all individual effective eigenmodes, as opposed to keeping only the dominant one. The analysis is applied over multiple-input multiple-output (MIMO) antennas in combination with a Rayleigh fading channel, using a quasi-likelihood ratio test (QLRT) algorithm for AMC. Expectation-maximization (EM) is employed for recursive computation of the underlying estimation and classification algorithms. Simulations demonstrate the advantages of the combined IMM-KF structure when compared to the perfectly known channel and the maximum likelihood estimate (MLE), achieving the targeted optimal performance with the benefit of a lower computational load.
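Each model in an IMM bank runs a standard Kalman predict/update cycle. A minimal scalar version is sketched below with illustrative noise values; the paper's MIMO channel estimator is matrix-valued and adds IMM mode mixing on top of this building block:

```python
def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One scalar Kalman-filter predict/update cycle.
    x, P: prior state estimate and variance; z: new measurement.
    F, Q: state transition and process noise; H, R: observation model
    and measurement noise (values here are illustrative)."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update with measurement z
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

Fed a steady channel observation, the estimate converges to it while the posterior variance shrinks toward its steady-state value.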
    • A comparison of flare forecasting methods. II. Benchmarks, metrics and performance results for operational solar flare forecasting systems

      Leka, K.D.; Park, S-H.; Kusano, K.; Andries, J.; Barnes, G.; Bingham, S.; Bloomfield, D.S.; McCloskey, A.E.; Delouille, V.; Falconer, D.; et al. (2019-08)
      Solar flares are extremely energetic phenomena in our Solar System. Their impulsive, often drastic radiative increases, in particular at short wavelengths, bring immediate impacts that motivate solar physics and space weather research to understand solar flares to the point of being able to forecast them. As data and algorithms improve dramatically, questions must be asked concerning how well the forecasting performs; crucially, we must ask how to rigorously measure performance in order to critically gauge any improvements. Building upon earlier-developed methodology (Barnes et al. 2016, Paper I), international representatives of regional warning centers and research facilities assembled in 2017 at the Institute for Space-Earth Environmental Research, Nagoya University, Japan to – for the first time – directly compare the performance of operational solar flare forecasting methods. Multiple quantitative evaluation metrics are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the “no skill” level, although which method scored top marks is decisively a function of flare event definition and the metric used; there was no single winner. Following in this paper series we ask why the performances differ by examining implementation details (Leka et al. 2019, Paper III), and then we present a novel analysis method to evaluate temporal patterns of forecasting errors (Park et al. 2019, Paper IV). With these works, this team presents a well-defined and robust methodology for evaluating solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be compared.
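Among the quantitative metrics used in such comparisons, the True Skill Statistic is a common choice because its zero point corresponds to the "no skill" level mentioned above. The sketch and the contingency-table counts below are illustrative, not values from the study:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = POD - POFD, computed from a 2x2 forecast contingency table:
    tp hits, fn misses, fp false alarms, tn correct rejections.
    0 is the "no skill" level; 1 is a perfect forecast; -1 is perfectly wrong."""
    pod = tp / (tp + fn)    # probability of detection (hit rate)
    pofd = fp / (fp + tn)   # probability of false detection
    return pod - pofd
```

Because POD and POFD are each normalized within the event and non-event populations, the TSS is insensitive to the event/non-event class imbalance, one reason it is favored for rare-event forecasts such as flares.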
    • A comparison of flare forecasting methods. III. Systematic behaviors of operational solar flare forecasting systems

      Leka, K.D.; Park, S-H.; Kusano, K.; Andries, J.; Barnes, G.; Bingham, S.; Bloomfield, D.S.; McCloskey, A.E.; Delouille, V.; Falconer, D.; et al. (2019-08)
      A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort and the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval and the small number of methods available, we do find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, we present in Paper IV a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
    • Saliency-weighted graphs for efficient visual content description and their applications in real-time image retrieval systems

      Ahmad, J.; Sajjad, M.; Mehmood, Irfan; Rho, S.; Baik, S.W. (2017-09)
      The exponential growth in the volume of digital image databases is making it increasingly difficult to retrieve relevant information from them. Efficient retrieval systems require distinctive features extracted from visually rich contents, represented semantically in a human perception-oriented manner. This paper presents an efficient framework to model image contents as an undirected attributed relational graph, exploiting color, texture, layout, and saliency information. The proposed method encodes salient features into this rich representative model without requiring any segmentation or clustering procedures, reducing the computational complexity. In addition, an efficient graph-matching procedure implemented on specialized hardware makes it more suitable for real-time retrieval applications. The proposed framework has been tested on three publicly available datasets, and the results prove its superiority in terms of both effectiveness and efficiency in comparison with other state-of-the-art schemes.
    • UAV based wilt detection system via convolutional neural networks

      Dang, L.M.; Hassan, S.I.; Suhyeon, I.; Sangaiah, A.K.; Mehmood, Irfan; Rho, S.; Seo, S.; Moon, H. (2019)
      The significant role of plants can be observed through the dependency of animals and humans on them: plants contribute oxygen, materials, food, and the beauty of the world. Climate change, the decrease in pollinators, and plant diseases are causing a significant decline in both the quality and coverage of plants and crops on a global scale. In developing countries, over 80 percent of rural production comes from smallholder farms; however, due to widespread plant diseases, yields are reported to have declined by more than half. These diseases are identified and diagnosed by agricultural and forestry departments, but manual inspection of large areas of fields requires a huge amount of time and effort, which significantly reduces effectiveness. To counter this problem, we propose an automatic disease detection and classification method for radish fields. A camera attached to an unmanned aerial vehicle (UAV) captures high-quality images of the fields, which are analyzed by extracting both color and texture features; K-means clustering is then used to filter radish regions, which are fed into a fine-tuned GoogleNet to detect Fusarium wilt of radish efficiently at an early stage, allowing the authorities to take timely action to ensure food safety for current and future generations.
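The K-means filtering step can be illustrated with a minimal 1-D version on a single hypothetical feature (e.g., a per-pixel greenness score); the actual pipeline clusters color and texture features of UAV images:

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Minimal 1-D K-means. Returns the sorted cluster centres; pixels
    nearest the vegetation-like centre would be kept as radish regions."""
    rng = random.Random(seed)
    centres = rng.sample(values, k)          # random initial centres
    for _ in range(iters):
        # Assign each value to its nearest centre
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centres[j]))
            clusters[i].append(v)
        # Recompute centres (keep old centre if a cluster emptied)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)
```

On well-separated data the two centres converge to the means of the low- and high-score groups, giving a simple threshold between background and vegetation.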
    • Divide-and-conquer based summarization framework for extracting affective video content

      Mehmood, Irfan; Sajjad, M.; Rho, S.; Baik, S.W. (2016-01)
      Recent advances in multimedia technology have led to tremendous increases in the available volume of video data, thereby creating a major requirement for efficient systems to manage such huge data volumes. Video summarization is one of the key techniques for accessing and managing large video libraries. It can be used to extract the affective contents of a video sequence to generate a concise representation of its content. Human attention models are an efficient means of affective content extraction. Existing visual-attention-driven summarization frameworks have high computational cost and memory requirements, as well as a lack of efficiency in accurately perceiving human attention. To cope with these issues, we propose a divide-and-conquer based framework for efficient summarization of big video data. We divide the original video data into shots, and an attention model is computed from each shot in parallel. A viewer's attention is based on multiple sensory perceptions, i.e., aural and visual, as well as the viewer's neuronal signals. The aural attention model is based on the Teager energy, instant amplitude, and instant frequency, whereas the visual attention model employs multi-scale contrast and motion intensity. Moreover, the neuronal attention is computed using the beta-band frequencies of neuronal signals. Next, an aggregated attention curve is generated using an intra- and inter-modality fusion mechanism. Finally, the affective content in each video shot is extracted. The fusion of multimedia and neuronal signals provides a bridge that links the digital representation of multimedia with the viewer’s perceptions. Our experimental results indicate that the proposed shot-detection-based divide-and-conquer strategy mitigates time and computational complexity. Moreover, the proposed attention model provides an accurate reflection of user preferences and facilitates the extraction of highly affective and personalized summaries.
    • Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors

      Mehmood, Irfan; Sajjad, M.; Baik, S.W. (2014-09)
      Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve these computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacity by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
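The Jeffrey divergence used above for redundancy elimination has a simple symmetric form. The sketch below assumes normalized color histograms and is illustrative of the measure, not of the paper's full frame-selection logic:

```python
import math

def jeffrey_divergence(p, q, eps=1e-12):
    """Jeffrey divergence between two normalized histograms p and q:
    sum_i [ p_i*log(2*p_i/(p_i+q_i)) + q_i*log(2*q_i/(p_i+q_i)) ].
    Symmetric and numerically stable where plain KL divergence blows up
    on empty bins; eps guards against division by zero."""
    d = 0.0
    for pi, qi in zip(p, q):
        m = pi + qi
        if pi > 0:
            d += pi * math.log(2.0 * pi / (m + eps))
        if qi > 0:
            d += qi * math.log(2.0 * qi / (m + eps))
    return d
```

Consecutive frames whose histogram divergence falls below a threshold are treated as redundant, so only one representative of each near-duplicate run survives to the keyframe set.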
    • Saliency-directed prioritization of visual data in wireless surveillance networks

      Mehmood, Irfan; Sajjad, M.; Ejaz, W.; Baik, S.W. (2015-07)
      In wireless visual sensor networks (WVSNs), streaming all imaging data is impractical due to resource constraints. Moreover, the sheer volume of surveillance videos inhibits the ability of analysts to extract actionable intelligence. In this work, an energy-efficient image prioritization framework is presented to cope with the fragility of traditional WVSNs. The proposed framework selects semantically relevant information before it is transmitted to a sink node. This is based on salient motion detection, which works on the principle of human cognitive processes. Each camera node estimates the background by a bootstrapping procedure, thus increasing the efficiency of salient motion detection. Based on the salient motion, each sensor node is classified as being high or low priority. This classification is dynamic, such that camera nodes toggle between high-priority and low-priority status depending on the coverage of the region of interest. High-priority camera nodes are allowed to access reliable radio channels to ensure the timely and reliable transmission of data. We compare the performance of this framework with other state-of-the-art methods for both single and multi-camera monitoring. The results demonstrate the usefulness of the proposed method in terms of salient event coverage and reduced computational and transmission costs, as well as in helping analysts find semantically relevant visual information.
    • A note on intelligent exploration of semantic data

      Thakker, Dhaval; Schwabe, D.; Garcia, D.; Kozaki, K.; Brambilla, M.; Dimitrova, V. (2019-04)
      Welcome to this special issue of the Semantic Web (SWJ) journal. The special issue compiles three technical contributions that significantly advance the state-of-the-art in exploration of semantic data using semantic web techniques and technologies.