Engineering and Digital Technology
All items in Bradford Scholars are protected by copyright, with all rights reserved.
Recent Submissions
-
Measuring gaze angle changes to maintain fixation upon a small target during motion: 3D motion tracking versus wearable eye-tracker
Recently we demonstrated how changes in gaze angle can be determined without an eye-tracker. The approach uses 3D motion capture to track the viewed target in the head’s reference frame and assumes head or target movement causes a gaze-angle change. This study determined the validity of this “assumed-gaze” method. Participants read information presented on a smartphone whilst walking. Changes in gaze angle were simultaneously assessed with an eye-tracker and our assumed-gaze method. The spatial and temporal agreement of the assumed-gaze approach with the eye-tracker was ~1 deg and ~0.02 s, respectively, and spatial congruence indicated that the direction of changes in the assumed-gaze angle was in accordance with that determined with the eye-tracker ~81% of the time. The findings indicate that when the head is moving and gaze is continually directed at a small target, our assumed-gaze approach can determine changes in gaze angle with precision comparable to a wearable eye-tracker.
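As a rough illustration of the assumed-gaze idea, the sketch below computes a gaze angle purely from the target's position tracked in the head's reference frame. The axis convention (x forward, z up) and the helper names are our assumptions for illustration, not the paper's method.

```python
import math

def assumed_gaze_angle(target_in_head):
    """Vertical gaze angle (deg) to a target point expressed in the head's
    reference frame (x forward, z up), assuming the eyes stay fixated on it."""
    x, y, z = target_in_head
    return math.degrees(math.atan2(z, math.hypot(x, y)))

def gaze_angle_change(pos_t0, pos_t1):
    """Change in assumed gaze angle between two motion-capture frames."""
    return assumed_gaze_angle(pos_t1) - assumed_gaze_angle(pos_t0)

# Example: target moves from eye level to ~30 deg below as the head pitches
print(round(gaze_angle_change((1.0, 0.0, 0.0), (1.0, 0.0, -0.577)), 1))  # -> -30.0
```

Because only the relative geometry of head and target is needed, no eye-tracking hardware enters the computation.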
-
FSO-based HAP-assisted multi-UAV backhauling over F channels with imperfect CSI
The non-terrestrial network (NTN), utilizing high-altitude platform (HAP)-based free-space optical (FSO) backhaul and unmanned aerial vehicles (UAVs) for last-mile access, is a feasible and promising architecture for achieving high data rates and seamless network coverage in the future 6G era. Effective resource allocation emerges as a pivotal concern for such networks. This paper addresses the data allocation issue for FSO backhaul from the HAP to multiple UAV-mounted base stations (BSs) under the constraints of ground users’ requested data rates. We introduce frame allocation schemes (FAS), including rate adaptation with constraints (RAC)- and rate/power adaptation (RPA)-aided FAS. The key idea of these schemes is to allocate data frames effectively based on each UAV’s turbulence channel conditions, aiming to (i) guarantee quality of service (QoS), (ii) retain both latency and throughput fairness, and (iii) minimize the transmitted power. Furthermore, the performance of these schemes is analyzed under the impact of imperfect channel state information (CSI). We newly derive the channel probability density function (PDF) and cumulative distribution function (CDF), considering the imperfect CSI due to channel estimation and quantization errors. Capitalizing on the derived PDF and CDF, different performance metrics are analytically obtained, incorporating the combined effects of cloud coverage, transceiver misalignment, Fisher-Snedecor F turbulence, and angle-of-arrival (AoA) fluctuations. Numerical results demonstrate the effectiveness of our design proposals over the state-of-the-art. Finally, Monte Carlo simulations are employed to validate the analysis.
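The flavour of channel-aware frame allocation can be conveyed with a toy scheme of our own devising (a simplification, not the paper's RAC/RPA algorithms): each UAV receives enough frames to meet its requested rate given its current channel rate, with a proportional back-off when the frame budget runs out.

```python
def allocate_frames(requests, channel_rates, total_frames):
    """Toy rate-adaptation frame allocator: give each UAV enough frames to
    meet its requested data rate at its current channel rate, then scale
    down proportionally if the frame budget is exceeded."""
    needed = [req / rate for req, rate in zip(requests, channel_rates)]
    total_needed = sum(needed)
    if total_needed <= total_frames:
        return [round(n) for n in needed]    # demand met, spare frames idle
    scale = total_frames / total_needed      # fair proportional back-off
    return [round(n * scale) for n in needed]

# Two UAVs with equal requests: the UAV on the worse channel needs more frames
print(allocate_frames([100, 100], [10, 5], 40))  # -> [10, 20]
```

The proportional back-off is what preserves throughput fairness under a tight frame budget; a real scheme would additionally adapt rate or power per frame.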
-
Bio-energy programs in Europe
Bio-energy availability, potential, production, challenges, and opportunities in Europe are addressed in this chapter. In addition, it provides a comprehensive comparison of different generations of biofuels, the types of bioenergy resources available in various EU countries, and predictions of the supply rate. The chapter helps readers understand the progress of consumable bioenergy resources and expected future trends. It is based on agreements among European energy institutes and related policies. Furthermore, the potential bio-energy resources in Europe, their environmental impacts, and the challenges and barriers that may be faced now and in the future are investigated.
-
Process simulation of fluidized bed granulation: effect of process parameters on granule size distribution
The purpose of granulation is to improve the flowability of powders whilst reducing dustiness and the potential for segregation. The focus of this project is to understand the effects of the process parameters of fluidized bed granulation on the granule size distribution of the final product using gFP simulation software (Siemens PSE, UK). The wet granulation process has become predominant and important in the pharmaceutical industry due to its cost-effectiveness and its robustness in product formulation. The process parameters studied include the air flow rate (20, 40 and 60 m3/hr), the binder concentration (6, 9 and 12 wt.%), and the binder spray rate (7.14, 14.28 and 21.42 ml/min). The results show that the binder spray rate has the greatest impact on the granule size distribution, with an increase in binder spray rate associated with a higher incidence of larger granules in the product. The air flow rate and the binder concentration have a negligible impact on the granule size distribution when agglomeration and consolidation models are not implemented in the simulation.
-
CS2 production from methane reforming with H2S
Methane reforming in the presence of hydrogen sulfide (H2SMR) is notable not only for producing valuable material but also for its advantages in obtaining hydrogen as a clean fuel. Substituting traditional hydrogen production processes such as methane steam reforming (MSR), and eliminating amine-based H2S removal from natural gas together with the associated sulfur recovery processes, have attracted much attention. Current hydrogen production consumes energy that is usually supplied by burning fossil fuels, so producing hydrogen by today's high greenhouse-gas-emitting methods is not a rational way to benefit from this clean energy source. H2SMR, with the potential of producing four moles of hydrogen and one mole of CS2 from methane, could be a promising alternative, providing the opportunity to produce cleaner fuels while simultaneously making CS2, which is used in the production of more valuable products. This chapter reviews recent progress in CS2 production from methane reforming in the presence of H2S and discusses the effect of the dominant parameters on this process.
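The quoted yield (four moles of H2 and one mole of CS2 per mole of CH4) corresponds to the overall reaction CH4 + 2 H2S -> CS2 + 4 H2, which the following snippet checks for element balance:

```python
# Element balance for H2S methane reforming: CH4 + 2 H2S -> CS2 + 4 H2
reactants = {"C": 1, "H": 4 + 2 * 2, "S": 2}   # one CH4 plus two H2S
products  = {"C": 1, "H": 4 * 2,     "S": 2}   # one CS2 plus four H2
print("balanced:", reactants == products)       # -> balanced: True
```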
-
Utilization of carbon dioxide using electrochemical reduction: A review
This article explores the electrochemical carbon dioxide reduction cell (eCO2RC), delving into fundamental principles, methods, applications, and the latest approaches for converting CO2 emissions into valuable products. Product outcomes depend on the electron exchange and the electrode surface attributes used in CO2 reduction. The study focuses on C1 and C2 products, emphasizing the need for selective materials and catalysts to enhance product recovery while minimizing energy consumption. Electrochemical conversion of CO2 into valuable products is seen as a crucial route for transforming waste into value and for mitigating global warming through reduced gas emissions.
-
Geothermal well systems and reservoir aspects: drilling, completion, and energy extraction methods
The current work’s main aims are to introduce and discuss the main aspects of geothermal wells and reservoirs, including well systems, heating, drilling, and completion. There are several systems and methods for extracting heat energy from underground formations, such as open/closed-loop, vertical/horizontal, pond, and slinky mechanisms, which require different distribution and efficient energy transfer systems. The geothermal well completion method and cementing process are similar to those of hydrocarbon wells. However, the materials and cement used in geothermal wells must be compatible with hot water and high-pressure, high-temperature (HPHT) steam. Therefore, careful planning of drilling and completion operations compatible with geothermal reservoirs is essential.
-
Process simulation of twin-screw granulation: A review
Twin-screw granulation has emerged as a key process in the powder processing industries and the pharmaceutical sector for producing granules with controlled properties. This comprehensive review provides an overview of the simulation techniques and approaches that have been employed in the study of twin-screw granulation processes. It discusses the major aspects of the process, including the fundamental principles of twin-screw granulation, equipment design, process parameters, and simulation methodologies. It highlights the importance of operating conditions and formulation design for powder flow dynamics, mixing behaviour, and particle interactions within the twin-screw granulator in enhancing product quality and process efficiency. Simulation techniques such as the population balance model (PBM), computational fluid dynamics (CFD), the discrete element method (DEM), process modelling software (PMS), and coupled techniques are critically discussed with a focus on simulating twin-screw granulation processes. The paper examines the challenges and limitations associated with each simulation approach and provides insights into future research directions. Overall, this article serves as a valuable resource for researchers who intend to deepen their understanding of twin-screw granulation and of the various techniques available for simulating the process.
-
Unlocking power: impact of physical and mechanical properties of biomass wood pellets on energy release and carbon emissions in the power sector
This study investigates the physical and mechanical properties of 12 biomass wood pellet samples used in power generation, focusing on their implications for energy release and carbon emissions during combustion. Through comprehensive analysis involving bulk density measurements, compression tests, moisture analysis, calorimetry and controlled burning experiments, significant correlations among key properties are identified. Pellets with densities above 1100 kg/m3 demonstrate superior mechanical durability and strength, achieving maximum strengths of 0.6 to 0.8 kN with durability exceeding 99.4%. Optimal moisture content, typically between 6 and 7%, is crucial for maximising density, bulk density, mechanical durability and fracture resistance, ensuring robust pellet structure and performance. The research underscores the impact of pellet dimensions, highlighting that longer lengths (>12 mm) enhance durability, while larger diameters (>8 mm) reduce it. Elemental analysis focusing on calcium, silicon and potassium plays a critical role in predicting and managing combustion-system fouling, potentially reducing operational costs. Moreover, the study emphasises the significant influence of oxygen levels during combustion on CO2 emissions: moisture content in the 7–8% range yields the maximum higher heating value (HHV), while moisture content in the 14–15% range gives the lowest CO2 emissions. The findings underscore the intricacy of the system and the interplay of the parameters with one another; the selection of parameters therefore warrants careful consideration according to the priorities of each application.
-
An investigation on hydrate prediction and inhibition: An industrial case study
This investigation reports the first study to predict natural gas hydrate formation using both Aspen HYSYS® and HydraFlash software for various gas compositions and thermodynamic inhibitors (monoethylene glycol [MEG] at 10, 20, 30, and 40 wt.% and methanol at 10 and 20 wt.%). The simulated predictions are compared with experimental data from the literature. It is shown that HydraFlash can accurately predict hydrate formation conditions for a given industrial case without costly experimental work. This work also evaluated the effect of inhibitors; it appears that inhibitor type and concentration are determined according to the gas composition. MEG is consequently selected as the most suitable hydrate inhibitor for the industrial case. This was also confirmed through COSMO-RS studies, in which the sigma profiles and sigma potentials of the considered inhibitors were obtained using density functional theory (DFT) calculations to verify the inhibitors' hydrogen-bonding affinities to water molecules. HydraFlash was utilized to predict the dissociation conditions of hydrates under high-concentration MEG inhibition, up to 40 wt.% at 313 K and a pressure of 311.1 bar. Finally, it is shown that both software packages are accurate and useful tools for hydrate prediction in simple systems; however, HydraFlash can simulate more complex systems, including different types of salts at higher pressures. The results provide insightful guidance for accurately predicting hydrate dissociation under simulated conditions.
-
Certifiability analysis of machine learning systems for low-risk automotive applications
Machine learning (ML) is increasingly employed to automate complex tasks, particularly in autonomous driving. While ML applications bring us closer to fully autonomous systems, they simultaneously introduce security and safety risks specific to safety-critical systems. Existing methods of software development and systems based on ML are fundamentally different. Moreover, the existing certification methods for automotive systems cannot fully certify the safe operation of ML-based components and subsystems, because the existing safety certification criteria were formulated before the advent of ML. Therefore, new or adapted methods are needed to certify ML-based systems. This article analyses the existing automotive safety standard, ISO 26262, to determine the certifiability of ML approaches used in low-risk automotive applications. This contributes towards assuring the security and safety of ML-based autonomous driving systems, particularly for low-risk automotive applications, to gain the trust of regulators, certification agencies, and stakeholders.
-
Machine learning predictions for bending capacity of ECC-concrete composite beams hybrid reinforced with steel and FRP bars
This paper explores the development of the most suitable machine learning models for predicting the bending capacity of ECC (Engineered Cementitious Composites)-concrete composite beams hybrid reinforced with steel and FRP (Fiber Reinforced Polymer) bars. Five machine learning models, namely Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), Multilayer Perceptron (MLP), Random Forest (RF), and Extremely Randomized Trees (ERT), were employed. To train and evaluate these predictive models, the study utilized a database of 150 experimental data points from the literature on steel and FRP bar hybrid reinforced ECC-concrete composite beams. Additionally, Shapley Additive Explanations (SHAP) analysis was employed to assess the impact of input features on the prediction outcomes. Furthermore, based on the optimal model identified in the research, a graphical user interface (GUI) was designed to facilitate the analysis of the bending capacity of hybrid reinforced ECC-concrete composite beams in practical applications. The results indicate that the XGBoost algorithm exhibits high accuracy in predicting bending capacity, demonstrating the lowest root mean square error, mean absolute error, and mean absolute percentage error, as well as the highest coefficient of determination on the testing dataset among all models. SHAP analysis indicates that the equivalent reinforcement ratio, the design strength of the FRP bars, and the height of the beam cross-section are significant feature parameters, while the influence of the compressive strength of the concrete is minimal. The predictive models and GUI developed can provide engineers and researchers with a reliable method for predicting the bending capacity of steel and FRP bar hybrid reinforced ECC-concrete composite beams.
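The four criteria used to rank the models can be computed as below, a plain-Python sketch with hypothetical bending-capacity values rather than the paper's data:

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%) and R^2 -- the four criteria the abstract uses
    to compare the candidate models on the testing dataset."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, mape, r2

# Hypothetical bending capacities (kN*m) vs. model predictions
rmse, mae, mape, r2 = regression_metrics([120, 150, 180], [118, 153, 177])
print(round(rmse, 2), round(mae, 2), round(r2, 3))  # -> 2.71 2.67 0.988
```

The best model is the one that minimises the three error metrics while maximising R^2, as reported for XGBoost.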
-
Enhancing safety in IoT systems: A model-based assessment of a smart irrigation system using fault tree analysis
The agricultural industry has the potential to undergo a revolutionary transformation through the use of Internet of Things (IoT) technology: crop monitoring can be improved, waste reduced, and efficiency increased. However, system failures carry risks that can lead to significant losses and food insecurity, so a proactive approach is necessary to ensure effective safety assessment of new IoT systems before deployment. It is crucial to identify potential causes of failure and their severity from the conceptual design phase of an IoT system within smart agricultural ecosystems. This study examines the failure behaviour of IoT-based Smart Irrigation Systems (SIS) to identify potential causes of failure. It proposes a comprehensive Model-Based Safety Analysis (MBSA) framework to model the failure behaviour of SIS and generate analysable safety artefacts of the system using the Systems Modeling Language (SysML). The MBSA approach brings rigour to the analysis, supports model reuse, and eases the development of a Fault Tree Analysis (FTA) model, thereby reducing the inherent limitations of informal system analysis. The FTA model identifies component failures and their propagation, providing a detailed understanding of how individual component failures can lead to overall failure of the SIS. By evaluating the SIS failure behaviour through the FTA model, the study offers valuable insights into the interconnectedness of component failures and generates multiple minimal cut sets, which provide actionable insights for designing dependable IoT-based SIS. The analysis identifies potential weak points in the design and provides a foundation for safety-risk mitigation strategies. The study emphasises the significance of a systematic, model-driven approach to improving the dependability of IoT systems in agriculture, ensuring sustainable and safe implementation.
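To illustrate what the FTA step produces, the sketch below derives minimal cut sets for a toy irrigation fault tree; the gate structure and event names are hypothetical, not taken from the study.

```python
from itertools import product

def cut_sets(gate):
    """Cut sets of a fault tree given as nested ('AND'|'OR', children)
    tuples, with plain strings as basic events."""
    if isinstance(gate, str):
        return [frozenset([gate])]
    op, children = gate
    child_sets = [cut_sets(c) for c in children]
    if op == "OR":                       # any child's cut set suffices
        return [s for cs in child_sets for s in cs]
    # AND: union one cut set from every child, over all combinations
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Keep only cut sets with no strict subset among the others."""
    return [s for s in sets if not any(o < s for o in sets)]

# Toy SIS tree: irrigation fails if the controller fails, or if both the
# soil-moisture sensor and its backup fail (hypothetical events)
tree = ("OR", ["controller_fail",
               ("AND", ["sensor_fail", "backup_sensor_fail"])])
for cs in minimal(cut_sets(tree)):
    print(sorted(cs))
```

Each printed set is a smallest combination of basic events that brings the whole system down, which is exactly what drives the weak-point analysis above.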
-
Analysis of detection systems in a Software-Defined Network
Software-Defined Networking (SDN), a novel and innovative networking technology, offers programmability and flexibility within networks and centralized control of those networks. The separation of the data and control planes, and the concentration of all control provisioning within an SDN controller, are two of the most significant ways in which SDN improves on traditional network deployments. However, because the planes of an SDN network are separated, the network contains several attack vectors that malicious users could exploit. Distributed Denial-of-Service (DDoS) attacks pose a particular threat to SDN because they can disrupt connections between the controller and data-plane devices. Therefore, developing and implementing intrusion detection systems (IDS) in SDN is necessary. This paper investigates IDS in software-defined networks for effectively detecting DDoS attacks using signature-based and machine learning (ML)-based approaches. Mininet and OpenDaylight are used to simulate an SDN environment in which normal and attack traffic is generated to assess intrusion detection techniques. Snort is employed as the signature-based IDS, while the ML algorithms Random Forest (RF), J48, Naive Bayes (NB), and Support Vector Machine (SVM) are used to implement the ML-based IDS. The IDS are examined using SDN-generated traffic, with the InSDN-NB model surpassing all other ML models and the Snort IDS, achieving 98.86% prediction accuracy with a training time of 1.46 s.
-
A multi-layered defence strategy against DDoS attacks in SDN/NFV-based 5G mobile networks
Software-defined networking (SDN) and network functions virtualisation (NFV) are crucial technologies for integration in the fifth generation of cellular networks (5G). However, they also pose new security challenges, and intrusion detection systems (IDSs) for 5G networks are a timely research subject. Current IDSs suffer from several limitations, resulting in wasted resources and residual security threats. This work proposes a new three-layered solution comprising forwarding and data transport, management and control, and virtualisation layers, with an emphasis on distributed controllers in the management and control layer. The proposed solution uses entropy detection to classify arriving packets as normal or suspicious and forwards the suspicious packets to a centralised controller for further processing using a self-organising map (SOM). A dynamic OpenFlow switch relocation method based on deep reinforcement learning is introduced to address the unbalanced burden among controllers and the static allocation of OpenFlow switches. The system is analysed using a Markov decision process, and a Double Deep Q-Network (DDQN) is used to train it. The experimental results demonstrate the effectiveness of the proposed approach in mitigating DDoS attacks, efficiently balancing controller workloads, and reducing the duration of the balancing process in 5G networks.
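The entropy-detection step can be sketched as follows. The feature (destination address), window size, and threshold here are illustrative assumptions; the idea is that a volumetric DDoS concentrates traffic on one victim, driving the entropy of destinations abnormally low.

```python
import math
from collections import Counter

def dst_entropy(packets):
    """Shannon entropy (bits) of destination addresses in one window."""
    counts = Counter(packets)
    n = len(packets)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def classify(packets, threshold=1.0):
    """Flag a window as suspicious when destinations are abnormally
    concentrated (low entropy), as in a volumetric DDoS on one victim."""
    return "suspicious" if dst_entropy(packets) < threshold else "normal"

normal = ["10.0.0.%d" % (i % 8) for i in range(64)]   # traffic spread evenly
attack = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 4         # one dominant victim
print(classify(normal), classify(attack))             # -> normal suspicious
```

In the proposed architecture only the windows flagged here would be escalated to the centralised controller's SOM stage.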
-
Comparative analysis of automated scanning and manual penetration testing for enhanced cybersecurity
Web platform security has become a significant concern in the current cyber world. Adversaries constantly advance their skills and technologies to bypass modern cyber defences and exploit website vulnerabilities. Finding and mitigating vulnerabilities in a website is essential to avoid damage to the organization. Two key techniques, vulnerability assessment and penetration testing, play a crucial role in identifying and mitigating these weaknesses. While vulnerability assessment scans the platform and reveals potential flaws, penetration testing goes a step further, simulating real-world attack scenarios to assess their true exploitability and possible damage. This paper compares automated scanning and manual penetration testing to evaluate the effectiveness of these techniques in uncovering vulnerabilities. The experimental results confirm that manual penetration testing is more accurate than automated testing, and the practical studies highlight the importance of a penetration tester's skills and experience in identifying and exploiting security weaknesses. Automated tools may also generate false-positive results.
-
Enhancing Zero Trust models in the financial industry through blockchain integration: A proposed framework
As financial institutions navigate an increasingly complex cyber-threat landscape and regulatory ecosystem, there is a pressing need for a robust and adaptive security architecture. This paper introduces a comprehensive Zero Trust-based framework specifically tailored to the finance industry. It encompasses identity and access management (IAM), data protection, and device and network security, and introduces trust through blockchain technology. The study provides a literature review of existing Zero Trust paradigms and contrasts them with cybersecurity solutions currently relevant to financial settings. The research adopts a mixed-methods approach, combining extensive qualitative analysis (a literature review and an assessment of security assumptions, threat modelling, and implementation strategies) with quantitative evaluation using a prototype banking application for vulnerability scanning, security testing, and performance testing. The IAM component ensures robust authentication and authorisation processes, while device and network security measures protect against both internal and external threats. Data protection mechanisms maintain the confidentiality and integrity of sensitive information. Additionally, the blockchain-based trust component serves as an innovative layer that enhances security measures, offering tamper-proof verification and increased integrity. Through analysis of potential threats and experimental evaluation of the Zero Trust model’s performance, the proposed framework offers financial institutions a comprehensive security architecture capable of effectively mitigating cyber threats and fostering enhanced consumer trust.
-
Enhancing design and performance analysis of satellite EB/CV-QKD/FSO systems
Satellite QKD/FSO systems, which facilitate quantum key distribution (QKD) over free-space optical (FSO) links between satellites and ground stations, present a promising pathway toward achieving global security in upcoming sixth-generation (6G) wireless communications. Our study focuses on a superior variant of these systems, the satellite EB/CV-QKD/FSO system, which uses the continuous-variable (CV) method for quantum state representation and the entanglement-based (EB) scheme for QKD implementation. We propose the use of quadrature phase-shift keying (QPSK) signaling and dual-threshold/heterodyne detection (DT/HD) receivers to bolster the reliability and feasibility of satellite EB/CV-QKD/FSO systems. Closed-form expressions for key system performance metrics are derived using improved channel modeling. Numerical results showcase the effects of channel impairments on system performance, and we provide recommendations for optimal system setup parameters to enhance performance.
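A minimal sketch of the dual-threshold idea follows, using a simplified real-valued measurement model with assumed amplitude, noise, and threshold values rather than the system analysed in the paper: measurements falling between the two thresholds are discarded during sifting, trading raw key rate for a lower error rate.

```python
import random

def dual_threshold_decide(x, d):
    """Dual-threshold rule: map a measured quadrature to a bit, or
    discard it when it falls in the ambiguous region [-d, +d]."""
    if x > d:
        return 1
    if x < -d:
        return 0
    return None  # inconclusive -> dropped during sifting

random.seed(1)
d = 0.8
sent = [random.randint(0, 1) for _ in range(10000)]
# Simplified measurement: signal amplitude +/-1 plus Gaussian noise
meas = [(2 * b - 1) + random.gauss(0, 0.5) for b in sent]
decided = [(b, dual_threshold_decide(x, d)) for b, x in zip(sent, meas)]
kept = [(b, r) for b, r in decided if r is not None]
errors = sum(b != r for b, r in kept)
print("sift ratio %.2f, QBER %.4f" % (len(kept) / len(sent), errors / len(kept)))
```

Raising `d` discards more ambiguous measurements, shrinking the sifted key but suppressing the quantum bit error rate; choosing `d` is exactly the kind of setup-parameter trade-off the abstract refers to.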
-
Acquisition time in laser inter-satellite link under satellite vibrations
Pointing, acquisition, and tracking (PAT) is a major technical challenge for laser inter-satellite links (ISLs). For fast establishment of the laser link and maximization of communication time, it is important to minimize the acquisition time. Satellite vibrations adversely affect the PAT procedure and thus increase the acquisition time. In this paper, we investigate through theoretical analysis the average acquisition time of laser ISLs in the presence of satellite vibrations. An analytical expression for the time taken from the beginning of the spiral scan to acquisition of the scan beam is provided in the presence of the pointing errors caused by vibrations. We also derive the optimum beam divergence angle for the acquisition time. The analyses are validated by Monte Carlo computer simulations and a proof-of-concept experiment. The results show that the acquisition time can be minimized by adjusting the beam divergence angle adaptively to the link conditions.
-
Time-efficient simulation of free-space optical communication systems under atmospheric turbulence, pointing error, and angle-of-arrival fluctuations
Computer simulation is a powerful and convenient tool for the design and performance evaluation of free-space optical (FSO) communication systems. In this article, we present two simulation frameworks that incorporate not only the effects of atmospheric turbulence but also the angular fluctuations of the transmitter and receiver in FSO systems. In the first framework, the waveform of the optical signal is calculated sequentially from the transmitter to the receiver, so obtaining the statistical performance of the system requires running the simulation many times, which takes very long because the vast majority of the simulation time is spent on the split-step beam propagation. In the second framework, we propose to isolate the beam propagation through the atmospheric channel from the other effects. We compare the two frameworks in terms of accuracy and simulation time and show that the second framework reduces the simulation time by more than a factor of 10 without sacrificing accuracy under various conditions.
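The decoupling idea of the second framework can be caricatured as follows: run the expensive propagation step only a few hundred times, store the resulting channel gains, and reuse them against cheap per-sample pointing-error draws. All distributions and parameters here are illustrative stand-ins (a unit-mean log-normal for turbulence, a Gaussian-jitter pointing loss), not the paper's models.

```python
import math, random

def turbulence_gain(rng):
    """Stand-in for an expensive split-step propagation run: one unit-mean
    log-normal turbulence channel gain (weak-turbulence approximation)."""
    return math.exp(rng.gauss(-0.125, 0.5))    # exp(mu + sigma^2/2) = 1

def pointing_gain(rng, sigma=0.3):
    """Cheap per-sample pointing loss from Gaussian radial jitter."""
    r = math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
    return math.exp(-2 * r * r)

def outage_sequential(n, threshold=0.3, seed=1):
    """Framework 1: the costly propagation is redone for every sample."""
    rng = random.Random(seed)
    return sum(turbulence_gain(rng) * pointing_gain(rng) < threshold
               for _ in range(n)) / n

def outage_decoupled(n, threshold=0.3, seed=1, pool=500):
    """Framework 2: run the costly propagation only `pool` times, then
    reuse the stored realizations against fresh pointing-error draws."""
    rng = random.Random(seed)
    turb = [turbulence_gain(rng) for _ in range(pool)]
    return sum(rng.choice(turb) * pointing_gain(rng) < threshold
               for _ in range(n)) / n

p1 = outage_sequential(20000)
p2 = outage_decoupled(20000)
print("outage: sequential %.3f, decoupled %.3f" % (p1, p2))
```

The two estimates agree statistically, while the decoupled version invokes the expensive step 40x less often, which is the source of the order-of-magnitude speed-up reported above.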