
    • Generalized Concept and MATLAB Code for Modeling and Analyzing Wideband 90° Stub-Loaded Phase Shifters with Simulation and Experimental Verifications

      Alnahwi, F.M.; Al-Yasir, Yasir I.A.; See, C.H.; Abdullah, A.S.; Abd-Alhameed, Raed A. (MDPI, 2023-09)
      In the design of phase shifters, the modeling equations are complicated and require approximations to be derived correctly by hand. In response to this problem, this paper presents a generalized concept, algorithm, and MATLAB code that provide the exact modeling equations of the transmission parameters and the scattering parameters of any 90° wideband stub-loaded phase shifter. The proposed code gives the modeling equations in terms of variables for any number of stubs and any characteristic impedance value by utilizing the symbolic analysis capabilities of MATLAB. It also illustrates the results as a function of normalized frequency relative to the center frequency f_o, and can be tailored to any user-defined frequency range. For comparison, a three-stub wideband 90° stub-loaded phase shifter is simulated using CST Microwave Studio and fabricated on a Rogers RT5880 dielectric substrate with dimensions of 30 × 40 × 0.8 mm³. The comparison reveals the accuracy of the proposed computerized modeling, with a −10 dB impedance bandwidth equal to 90% (0.55 f_o–1.45 f_o), a (90 ± 5)° phase-difference bandwidth equal to 100% (0.5 f_o–1.5 f_o), and negligible insertion loss. The novelty of this work is that the proposed code provides the exact modeling equations of the stub-loaded phase shifter for any number of stubs regardless of the complexity of the mathematical derivations.
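The ABCD-matrix cascade that such a code automates can be sketched numerically. The following Python sketch is only an illustration of the technique (the paper uses MATLAB's symbolic toolbox; the topology, impedance values, and stub lengths here are assumptions, not taken from the paper): it cascades shunt open-circuited stubs and a connecting line, then converts the overall ABCD matrix to S21.

```python
import cmath
import math

Z0 = 50.0  # reference impedance (ohms)


def line(theta, zc):
    # ABCD matrix of a lossless transmission-line section of electrical length theta
    return [[cmath.cos(theta), 1j * zc * cmath.sin(theta)],
            [1j * cmath.sin(theta) / zc, cmath.cos(theta)]]


def shunt_open_stub(theta, zc):
    # ABCD matrix of a shunt open-circuited stub (input admittance j*tan(theta)/zc)
    y = 1j * cmath.tan(theta) / zc
    return [[1, 0], [y, 1]]


def mul(m, n):
    # 2x2 complex matrix product
    return [[m[0][0] * n[0][0] + m[0][1] * n[1][0], m[0][0] * n[0][1] + m[0][1] * n[1][1]],
            [m[1][0] * n[0][0] + m[1][1] * n[1][0], m[1][0] * n[0][1] + m[1][1] * n[1][1]]]


def s21(m):
    # S21 of a two-port from its ABCD matrix, matched to Z0 at both ports
    a, b = m[0]
    c, d = m[1]
    return 2 / (a + b / Z0 + c * Z0 + d)


def network(f_over_f0, z_line=70.7, z_stub=100.0):
    # illustrative cascade: 45-deg stub, 90-deg line, 45-deg stub at f0;
    # electrical lengths scale linearly with the normalized frequency f/f0
    th = (math.pi / 2) * f_over_f0
    m = shunt_open_stub(th / 2, z_stub)
    m = mul(m, line(th, z_line))
    m = mul(m, shunt_open_stub(th / 2, z_stub))
    return m


phase_f0 = math.degrees(cmath.phase(s21(network(1.0))))
```

Sweeping `f_over_f0` over, say, 0.5–1.5 and plotting the phase of `s21` against a reference line reproduces the kind of normalized-frequency curves the paper's code generates.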
    • Developing Dependable IoT Systems: Safety Perspective

      Abdulhamid, Alhassan; Kabir, Sohag; Ghafir, Ibrahim; Lei, Ci (2023-08)
      The rapid proliferation of internet-connected devices in public and private spaces offers humanity numerous conveniences, including many safety benefits. However, unlocking the full potential of the Internet of Things (IoT) requires assurance that IoT devices and applications do not pose any safety hazards to their stakeholders. While numerous efforts have been made to address security-related challenges in the IoT environment, safety issues have yet to receive similar attention. Safety is one of an IoT system's vital non-functional properties and a key attribute of its dependability. IoT systems are susceptible to safety breaches due to a variety of factors, such as hardware failures, misconfigurations, conflicting interactions of devices, human error, and deliberate attacks. Maintaining safety requirements is challenging due to the complexity, autonomy, and heterogeneity of the IoT environment. This article explores safety challenges across the IoT architecture and some application domains, and highlights the importance of safety attributes, requirements, and mechanisms in IoT design. By analysing these issues, we can protect people from hazards that could negatively impact their health, safety, and the environment.
    • Adaptation of Model Transformation for Safety Analysis of IoT-based Applications

      Abdulhamid, Alhassan; Kabir, Sohag; Ghafir, Ibrahim; Lei, Ci (2023-08)
      The Internet of Things (IoT) paradigm has continued to provide valuable services across various domains. However, guaranteeing the safety assurance of IoT systems is increasingly becoming a concern. While the growing complexity of IoT design has brought additional safety requirements, developing safe systems remains a critical design objective. In earlier studies, a limited number of approaches were proposed to evaluate the safety requirements of IoT systems through the generation of static safety artefacts based on manual processes. This paper proposes a model-based approach to the safety analysis of IoT systems. The proposed framework exploits the expressiveness of UML/SysML graphical modelling languages to develop a dynamic fault tree (DFT) as an analysis artefact of the IoT system. The framework was validated using a hypothetical IoT-enabled Smart Fire Detection and Prevention System (SFDS). The novel framework can capture dynamic failure behaviour, which is often ignored in most model-based approaches. This effort complements the inherent limitations of existing manual static failure analyses of IoT systems and, consequently, facilitates a viable safety analysis that increases public assurance in IoT systems.
    • A fuzzy data-driven reliability analysis for risk assessment and decision making using Temporal Fault Trees

      Kabir, Sohag (Elsevier, 2023-09)
      Fuzzy data-driven reliability analysis has been used in different safety-critical domains for risk assessment and decision-making where precise failure data are non-existent. Expert judgements and fuzzy set theory have been combined with different variants of fault trees as part of fuzzy data-driven reliability analysis studies. In such fuzzy fault tree analyses, different studies have represented failure data using different membership functions for the fuzzy sets, and parameters of the expert opinion elicitation process have been set differently. Given this wide variety of options, it is possible to obtain different outcomes when choosing one option over another. This article performs an analysis in the context of fuzzy data-based temporal fault tree analysis to investigate the effect of choosing different membership functions on the estimated system reliability and the criticality ranking of different failure events. Moreover, the effect of using different values for the relaxation factor, a parameter set during the expert elicitation process, on system reliability and criticality evaluation was studied. The experiments on the fuel distribution system case study show that system reliability did not vary when triangular and trapezoidal fuzzy numbers were used with the same upper and lower bounds. However, the criticality rankings of a couple of events changed when different membership functions and different values of the relaxation factor were chosen.
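The membership-function comparison described above can be illustrated with alpha-cut interval arithmetic. This minimal Python sketch (with illustrative failure probabilities, not the paper's fuel-distribution data) shows why triangular and trapezoidal fuzzy numbers that share the same upper and lower bounds give the same support interval for an AND-gate top event:

```python
def tri_cut(a, b, c, alpha):
    # alpha-cut interval of a triangular fuzzy number (a, b, c)
    return (a + alpha * (b - a), c - alpha * (c - b))


def trap_cut(a, b, c, d, alpha):
    # alpha-cut interval of a trapezoidal fuzzy number (a, b, c, d)
    return (a + alpha * (b - a), d - alpha * (d - c))


def and_gate(cut1, cut2):
    # AND gate: failure probabilities multiply; for intervals within [0, 1]
    # the product interval is [lo1*lo2, hi1*hi2]
    return (cut1[0] * cut2[0], cut1[1] * cut2[1])


# two basic events, each with the same support [0.01, 0.03] but different shapes
alphas = [i / 10 for i in range(11)]
tri = [and_gate(tri_cut(0.01, 0.02, 0.03, a), tri_cut(0.01, 0.02, 0.03, a))
       for a in alphas]
trap = [and_gate(trap_cut(0.01, 0.015, 0.025, 0.03, a),
                 trap_cut(0.01, 0.015, 0.025, 0.03, a))
        for a in alphas]

# at alpha = 0 (the support) both shapes give the same top-event interval
print(tri[0], trap[0])
```

Interior alpha-cuts do differ between the two shapes, which is where shape-dependent criticality rankings can arise even when the support (and hence the widest reliability bounds) is unchanged.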
    • Characterization and life cycle assessment of geopolymer mortars with masonry units and recycled concrete aggregates assorted from construction and demolition waste

      Kul, A.; Ozel, B.F.; Ozcelikci, E.; Gunal, M.F.; Ulugol, H.; Yildirim, Gurkan; Sahmaran, M. (2023-11)
      Developing a fast, cost-effective, eco-friendly solution to recycle the large amounts of construction and demolition waste (CDW) generated by construction industry-related activities and natural disasters is crucial. The present investigation aims to offer a solution for repurposing CDW into building materials suitable for accelerated construction and housing in developing countries and disaster-prone areas. The feasibility of including recycled concrete aggregate (RCA) in geopolymer mortars constituted entirely from CDW (masonry elements) was investigated via an environmental impact-oriented approach addressing the composition-related key parameters. Mechanical performance was evaluated through compressive strength tests, and scanning electron microscope (SEM) imaging with line mapping analyses was carried out to monitor the interfacial transition zone (ITZ) properties. To investigate the environmental impacts of the geopolymer mortars and highlight their advantages over Portland cement-based mortars, a cradle-to-gate life cycle assessment (LCA) was performed. Findings revealed that roof tile (RT)-based geopolymer mortars mainly exhibited better strength performance due to their finer particle size. Mixtures activated with 15 M NaOH solution and cured at 105 °C achieved an average compressive strength above 55 MPa. RCA size was the most influential parameter on compressive strength, and a smaller maximum RCA size significantly increased the compressive strength. Microstructural analyses showed that the ITZ around smaller RCAs was relatively thinner, resulting in better compressive strength results. The LCA showed that CDW-based geopolymer mortars provide the same compressive strength with around 60% less CO2 emissions and similar energy consumption compared to Portland cement-based mortars.
    • Transformative role of big data through enabling capability recognition in construction

      Atuahene, Bernard T.; Kanjanabootra, S.; Gajendran, T. (Taylor and Francis (Routledge), 2023)
      Big data application is a significant transformative driver of change in the retail, health, engineering, and advanced manufacturing sectors. Big data studies in construction are still somewhat limited, although there is increasing interest in what big data application could achieve. Through interviews with construction professionals, this paper identifies the capabilities needed in construction firms to enable the accrual of the potentially transformative benefits of big data application in construction. Based on previous studies, the big data application capabilities needed to transform construction processes focussed on data, people, technology, and organisation. However, the findings of this research suggest a critical modification to that focus, to include knowledge and the organisational environment along with people, data, and technology. The research findings show that construction firms use big data with a combination strategy to enable transformation by (a) driving an in-house data management policy to roll out big data capabilities; (b) fostering collaborative capabilities with external firms for resource development; and (c) outsourcing big data services to address the capability deficits impacting digital transformation.
    • Early diagnosis and personalised treatment focusing on synthetic data modelling: Novel visual learning approach in healthcare

      Mahmoud, Ahsanullah Y.; Neagu, Daniel; Scrimieri, Daniele; Abdullatif, Amr R.A. (2023-09)
      The early diagnosis and personalised treatment of diseases are facilitated by machine learning. The quality of data has an impact on diagnosis because medical data are usually sparse, imbalanced, and contain irrelevant attributes, resulting in suboptimal diagnosis. To address these data challenges, improve resource allocation, and achieve better health outcomes, a novel visual learning approach is proposed. This study contributes to the visual learning approach by determining whether less or more synthetic data are required to improve the quality of a dataset, such as the number of observations and features, according to the intended personalised treatment and early diagnosis. In addition, numerous visualisation experiments are conducted, using statistical characteristics, cumulative sums, histograms, correlation matrices, root mean square error, and principal component analysis to visualise both original and synthetic data. Real medical datasets for cancer, heart disease, diabetes, cryotherapy, and immunotherapy are selected as case studies. As a benchmark and point of comparison for classification metrics such as accuracy, sensitivity, and specificity, several models are implemented, such as k-Nearest Neighbours and Random Forest. A Generative Adversarial Network is used to create and manipulate synthetic data, whilst a Random Forest is implemented to classify the data. An amendable and adaptable system is constructed by combining the Generative Adversarial Network and Random Forest models; the system is presented through its working steps, an overview, and a flowchart. Experiments reveal that the majority of data-enhancement scenarios allow for the application of visual learning in the first stage of data analysis as a novel approach. To achieve a meaningful, adaptable synergy between appropriate-quality data and optimal classification performance while maintaining statistical characteristics, visual learning provides researchers and practitioners with practical human-in-the-loop machine learning visualisation tools. Prior to implementing algorithms, the visual learning approach can be used to actualise early and personalised diagnosis. For the immunotherapy data, the Random Forest performed best with precision, recall, f-measure, accuracy, sensitivity, and specificity of 81%, 82%, 81%, 88%, 95%, and 60% on the original data, as opposed to 91%, 96%, 93%, 93%, 96%, and 73% for synthetic data, respectively. Future studies might examine the optimal strategies to balance the quantity and quality of medical data.
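The evaluation metrics quoted above are standard confusion-matrix quantities. A small Python helper (the counts below are made up for illustration, not the paper's immunotherapy results) shows how each is computed:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # standard binary-classification metrics of the kind used to compare
    # classifiers trained on original vs. synthetic medical data
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # identical to sensitivity
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)
    return {"precision": precision, "recall": recall, "f-measure": f_measure,
            "accuracy": accuracy, "sensitivity": recall, "specificity": specificity}


print(diagnostic_metrics(tp=45, fp=5, tn=40, fn=10))
```

Reporting sensitivity and specificity together, as the abstract does, matters for imbalanced medical data, where accuracy alone can look strong while the minority class is misclassified.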
    • An overview of safety and security analysis frameworks for the Internet of Things

      Abdulhamid, Alhassan; Kabir, Sohag; Ghafir, Ibrahim; Lei, Ci (2023-07)
      The rapid progress of the Internet of Things (IoT) has continued to offer humanity numerous benefits, including many security- and safety-critical applications. However, unlocking the full potential of IoT applications, especially in high-consequence domains, requires assurance that IoT devices will not pose hazards to users or the environment. To design safe, secure, and reliable IoT systems, numerous frameworks have been proposed to analyse safety and security, among other properties. This paper reviews some of the prominent classical and model-based systems engineering (MBSE) approaches for IoT systems' safety and security analysis. The review established that most analysis frameworks are based on classical manual approaches, which evaluate the two properties independently. The manual frameworks tend to inherit the natural limitations of informal system modelling, such as human error, cumbersome processes, time consumption, and a lack of support for reusability. Model-based approaches have been incorporated into the safety and security analysis process to simplify the analysis and improve the efficiency and manageability of system design. However, the existing MBSE safety and security analysis approaches in the IoT environment are still in their infancy. The few proposed MBSE approaches have considered only limited and simple scenarios, which do not yet adequately evaluate the complex interactions between the two properties in the IoT domain. The findings of this survey are that the existing methods have not adequately addressed the analysis of safety/security interdependencies, detailed cyber security quantification analysis, or the unified treatment of safety and security properties. These limitations of the existing classical and MBSE frameworks leave gaps in any meaningful assessment of IoT dependability. To address some of these gaps, we propose a possible research direction for developing a novel MBSE approach for a safety and security co-analysis framework in the IoT domain.
    • Optimal planning and operation of distribution systems using network reconfiguration and flexibility services

      Marquez, Jorge A.; Al-Ja'Afreh, Mohammad A.; Mokryani, Geev; Kabir, Sohag; Campean, I. Felician; Dao, Cuong D.; Riaz, Sana (Elsevier, 2023-12)
      This paper proposes a novel approach for the reliability cost-based optimization of Distribution Systems (DS), considering a tie-line-based network reconfiguration method with the integration of Distributed Energy Resources (DER). An optimal Energy Not Supplied (ENS) index is proposed, where capacity is handled by curtailment devices in the network, such as sectionalizers, and by the energy supplied by DERs, considering Flexibility Services (FS) within a market environment. The decision variables include the investment in and operation of tie-lines and the purchase of regulation services from DERs such as Distributed Generation (DG) and Battery Energy Storage Systems (BESS). The results validate the cost-effectiveness of the proposed method through the implementation of these technologies to improve the reliability of the DS, within a comprehensive set of case-study scenarios for a 16-bus UK generic distribution system (UKGDS). The case-study results indicate that significant savings can be achieved through the proposed method, ranging between 36% and 71% depending on the level of automation in tie-line operations in combination with the settlement price for the power balance of FS. This illustrates that the proposed DS reliability cost-based optimization method could have a significant impact for real-world DG and BESS applications.
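The ENS index at the heart of such a reliability optimisation can be sketched very simply. In this Python illustration (all load and outage figures are invented, not from the UKGDS case study), reconfiguration and flexibility services shorten outage durations at each load point and so reduce ENS:

```python
def energy_not_supplied(load_points):
    # ENS index (kWh/yr): sum over load points of
    # (average load curtailed, kW) x (annual outage duration, h)
    return sum(p_kw * hours for p_kw, hours in load_points)


# (curtailed load kW, outage h/yr) per load point -- illustrative values
base = energy_not_supplied([(120, 6.5), (80, 4.0), (200, 2.5)])     # no reconfiguration
reconf = energy_not_supplied([(120, 2.0), (80, 1.5), (200, 0.75)])  # tie-lines + DER flexibility

saving = 100 * (base - reconf) / base  # percentage ENS reduction
```

Costing the residual ENS (e.g. at a value-of-lost-load price) and trading it off against tie-line investment and FS settlement prices is the kind of objective the paper's optimisation evaluates.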
    • A knowledge-driven model to assess inherent safety in process infrastructure

      Gholamizadeh, K.; Zarei, E.; Kabir, Sohag; Mamudu, A.; Aala, Y.; Mohammadfam, I. (2023-06)
      Process safety has drawn increasing attention in recent years and has been investigated from different perspectives, such as quantitative risk analysis, consequence modeling, and regulations. However, few attempts have been made to focus on inherent safety design assessment, despite it being the most cost-effective safety tactic and playing a vital role in the sustainable development and safe operation of process infrastructure. Accordingly, the present research proposes a knowledge-driven model to assess inherent safety in process infrastructure under uncertainty. We first developed a holistic taxonomy of factors contributing to inherent safety design, considering the chemical, reaction, process, equipment, human-factors, and organizational concerns associated with process plants. Then, we used subject-matter experts, the content validity ratio (CVR), and the content validity index (CVI) to validate the taxonomy and data collection tools. We then employed a fuzzy inference system and the Extent Analysis (EA) method for knowledge acquisition under uncertainty. We tested the proposed model on a steam methane reforming plant that produces hydrogen as renewable energy. The findings revealed the factors and indicators that contribute most to improving the inherent safety design in the studied plant and effectively support the decision-making process in assigning proper safety countermeasures.
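The CVR used to validate such a taxonomy follows Lawshe's formula. A short Python sketch (the expert counts are illustrative, and the CVI here is taken as the mean CVR over retained items, which is one common convention rather than necessarily the paper's exact definition):

```python
def content_validity_ratio(n_essential, n_experts):
    # Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1,
    # reaching +1 when every expert rates the item "essential"
    half = n_experts / 2
    return (n_essential - half) / half


def content_validity_index(cvr_values):
    # CVI of the instrument, taken here as the mean CVR of retained items
    # (an assumed convention for this sketch)
    return sum(cvr_values) / len(cvr_values)


# three taxonomy items rated "essential" by 10, 9, and 8 of 10 experts
ratios = [content_validity_ratio(n, 10) for n in (10, 9, 8)]
print(ratios, content_validity_index(ratios))
```

Items whose CVR falls below the critical value for the panel size would be dropped before the fuzzy inference stage.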
    • Poloxamer-based nanogels as delivery systems: how structural requirements can drive their biological performance

      Shriky, Banah; Vigato, A.A.; Sepulveda, A.F.; Machado, I.P.; Ribeiro de Araujo, D. (Springer, 2023-08)
      Poloxamer- (Pluronic®-) based nanogels are among the most used matrices for developing delivery systems. Due to their thermoresponsive and flexible mechanical properties, they allow the incorporation of several molecules, including drugs, biomacromolecules, lipid derivatives, polymers, and metallic, polymeric, or lipid nanocarriers. The thermogelling mechanism is driven by micelle formation and self-assembly into phase organizations (lamellar, hexagonal, cubic) in response to microenvironmental conditions such as temperature, osmolarity, and incorporated additives. Different biophysical techniques have therefore been used to investigate those structural transitions, from the mechanisms to the preferential orientation and organization of components. Since the design of PL-based pharmaceutical formulations is driven by the choice of polymer type and its physico-chemical properties, it is also relevant to highlight that factors inherent to the polymeric matrix can be strongly influenced by the presence of additives, and that these determine the nanogels' biopharmaceutical properties, such as bioadhesion, drug loading, surface interaction behavior, dissolution, and release rate control. In this review, we discuss the general applicability of three of the main biophysical techniques used to characterize those systems: scattering techniques (small-angle X-ray and neutron scattering), rheology, and Fourier transform infrared spectroscopy (FTIR), connecting their supramolecular structure and offering insights for formulating effective therapeutic delivery systems.
    • Enhancement of precise underwater object localization

      Kaveripakum, S.; Chinthaginjala, R.; Anbazhagan, R.; Alibakhshikenari, M.; Virdee, B.; Khan, S.; Pau, G.; See, C.H.; Dayoub, I.; Livreri, P.; et al. (Radio Science, 2023-09)
      Underwater communication applications extensively use localization services for object identification. Because of their significant impact on ocean exploration and monitoring, underwater wireless sensor networks (UWSN) are becoming increasingly popular, and acoustic communications have largely overtaken radio frequency (RF) broadcasts as the dominant means of communication. The two most frequently employed localization methods are those that estimate the angle of arrival (AOA) and the time difference of arrival (TDoA). The military and civilian sectors rely heavily on UWSN for object identification in the underwater environment. As a result, there is a need in UWSN for an accurate localization technique that accounts for the dynamic nature of the underwater environment. Time and position data are the two key parameters for accurately defining the position of an object. Moreover, due to climate change there is now a need to constrain the energy consumption of UWSN to limit carbon emissions and meet the net-zero target by 2050. To meet these challenges, we have developed an efficient localization algorithm for determining an object's position based on the angle and distance of arrival of beacon signals. We have considered factors such as sensor nodes not being time-synchronized with each other and the fact that the speed of sound varies in water. Our simulation results show that the proposed approach can achieve high localization accuracy while accounting for temporal synchronization inaccuracies, and it outperforms existing localization approaches on mean estimation error (MEE) and energy consumption. The MEE is shown to vary between 84.2154 m and 93.8275 m for four trials, 61.2256 m and 92.7956 m for eight trials, and 42.6584 m and 119.5228 m for twelve trials. Comparatively, the distance-based measurements show higher accuracy than the angle-based measurements.
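The angle-and-distance localization idea can be sketched in a few lines. This Python illustration is a simplified 2-D version (the sound-speed expression is Medwin's empirical approximation, used here as an assumption rather than the formula from the paper): each beacon's bearing and travel time are projected to a position estimate, and the estimates are averaged.

```python
import math


def sound_speed(temp_c, salinity_ppt=35.0, depth_m=0.0):
    # Medwin's empirical approximation for the speed of sound in seawater (m/s);
    # captures the temperature dependence the paper accounts for
    return (1449.2 + 4.6 * temp_c - 0.055 * temp_c ** 2 + 0.00029 * temp_c ** 3
            + (1.34 - 0.010 * temp_c) * (salinity_ppt - 35.0) + 0.016 * depth_m)


def locate(beacons):
    # each beacon: (x, y, bearing_to_target_rad, travel_time_s, temperature_C);
    # distance = travel time x local sound speed; the final estimate is the
    # average of the per-beacon projections
    xs, ys = [], []
    for x, y, theta, t, temp in beacons:
        d = t * sound_speed(temp)
        xs.append(x + d * math.cos(theta))
        ys.append(y + d * math.sin(theta))
    n = len(beacons)
    return sum(xs) / n, sum(ys) / n
```

With consistent measurements the projections coincide; with noisy angles, times, or clock offsets they scatter, and the spread of the per-beacon estimates is one simple way to gauge the MEE.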
    • Metamaterials and Metasurfaces

      Ojaroudi Parchin, Naser; Ojaroudi, M.; Abd-Alhameed, Raed A. (MDPI, 2023-07)
    • Machine learning based small bowel video capsule endoscopy analysis: Challenges and opportunities

      Wahab, Haroon; Mehmood, Irfan; Ugail, Hassan; Sangaiah, A.K.; Muhammad, K. (2023)
      Video capsule endoscopy (VCE) is a revolutionary technology for the early diagnosis of gastric disorders. However, owing to the high redundancy and subtle manifestation of anomalies among thousands of frames, the manual interpretation of VCE videos requires considerable patience, focus, and time. The automatic analysis of these videos using computational methods is a challenge, as the capsule is untamed in motion and captures frames inaptly. Several machine learning (ML) methods, including recent deep convolutional neural network approaches, have been adopted after evaluating their potential to improve VCE analysis. However, the clinical impact of these methods is yet to be investigated. This survey aimed to highlight the gaps between existing ML-based research methodologies and clinically significant rules recently established by gastroenterologists for VCE. A framework for interpreting raw frames into contextually relevant frame-level findings and subsequently merging these findings with meta-data to obtain a disease-level diagnosis was formulated. Frame-level findings can be more intelligible for discriminative learning when organized in a taxonomical hierarchy. The proposed taxonomical hierarchy, which is formulated based on pathological and visual similarities, may yield better classification metrics by setting inference classes at a higher level than training classes. Mapping from the frame level to the disease level was structured in the form of a graph based on clinical relevance, inspired by the recent international consensus developed by domain experts. Furthermore, existing methods for VCE summarization, classification, segmentation, detection, and localization were critically evaluated and compared based on aspects deemed significant by clinicians. Numerous studies pertain to single-anomaly detection rather than a pragmatic approach for a clinical setting. The challenges and opportunities associated with VCE analysis were delineated. A focus on maximizing the discriminative power of features corresponding to various subtle lesions and anomalies may help cope with the diverse and mimicking nature of different VCE frames. Large multicenter datasets must be created to cope with data sparsity, bias, and class imbalance. Explainability, reliability, traceability, and transparency are important for an ML-based diagnostic system in VCE. Existing ethical and legal bindings narrow the scope of possibilities where ML can potentially be leveraged in healthcare. Despite these limitations, ML-based video capsule endoscopy will revolutionize clinical practice, aiding clinicians in rapid and accurate diagnosis.
    • The Effects of Vaporisation Models on the FCC Riser Reactor

      Selalame, Thabang W.; Patel, Raj; Mujtaba, Iqbal M.; John, Yakubu M. (2023-06-20)
      This work presents a steady-state one-dimensional model of the FCC riser considering the vaporisation of the gas oil feed and the subsequent cracking reactions. The evaporation of droplets is studied using three models: the classical homogeneous model and heterogeneous vaporisation models from the literature. Droplets are modelled using the Lagrangian framework for particles moving through a fluid, coupled with the gas–solid flow field describing catalyst particulate transport in the riser. Cracking reaction kinetics are modelled using a four-lump model. The model was validated against published plant data and performed well in terms of gas oil conversion, gasoline yield, pressure drop, and phase temperature profiles. It is therefore suitable for use in the design and optimisation of new and existing FCC unit risers, particularly in cost–benefit analyses considering the current push away from petroleum energy sources. It was found that the choice of vaporisation model is largely insignificant for gas oil conversion profiles and gasoline yield under usual operating conditions of FCC risers, a finding that had not previously been demonstrated in the literature. Vaporisation models are shown to affect conversion and yield only when the initial droplet diameter exceeds 2000 μm.
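The classical homogeneous vaporisation model reduces to the d²-law, under which a droplet's squared diameter shrinks linearly in time, so its lifetime scales with the square of the initial diameter. A minimal Python sketch (the evaporation constant K is an illustrative value, not fitted to the paper's riser conditions):

```python
def d2_law_lifetime(d0_um, k=0.3e-6):
    # classical d^2-law: D^2(t) = D0^2 - k*t, so the droplet lifetime is D0^2 / k
    # d0_um: initial droplet diameter in micrometres
    # k: evaporation constant in m^2/s (illustrative assumption)
    d0 = d0_um * 1e-6  # convert to metres
    return d0 ** 2 / k


# a 2000 um droplet takes 100x longer to vaporise than a 200 um droplet,
# which is why only very large droplets make the vaporisation model matter
ratio = d2_law_lifetime(2000) / d2_law_lifetime(200)
```

This quadratic scaling is consistent with the paper's observation that model choice only becomes significant for very large initial droplets, whose vaporisation occupies a non-negligible fraction of the riser residence time.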
    • Simulation and optimisation of a medium scale reverse osmosis brackish water desalination system under variable feed quality: Energy saving and maintenance opportunity

      Al-Obaidi, M.A.; Alsarayreh, Alanood A.; Bdour, A.; Jassam, S.H.; Rashid, F.L.; Mujtaba, Iqbal M. (2023-11-01)
      In this work, we considered the model-based simulation and optimisation of a medium-scale brackish water desalination process. The mathematical model is validated using actual multistage RO plant data from Al-Hashemite University (Jordan). Using the validated model, a sensitivity analysis of different operating parameters, such as pump pressure, brackish water flow rate, and seasonal water temperature (covering the whole year), on performance indicators such as productivity, product salinity, and specific energy consumption is conducted. For a given feed flow rate and pump pressure, the winter season produces less freshwater than the summer, in line with the assumption that winter water demand is lower than that in summer. With energy prices soaring globally, any opportunity to reduce energy use is not only desirable from an economic point of view but an absolute necessity to meet the net-zero carbon emission pledges of many nations, as most desalination plants worldwide use fossil fuel as their main source of energy. Therefore, the second part of this paper attempts to minimise the specific energy consumption of the RO system using a model-based optimisation technique. The study resulted in not only a 19% reduction in specific energy consumption but also a 4.46% increase in productivity in a particular season of the year. For a fixed product demand, this opens the opportunity to schedule cleaning and maintenance of the RO process without having to consider a full system shutdown.
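The specific energy consumption of an RO stage can be estimated from pump pressure, feed flow, and permeate flow. The Python sketch below uses invented operating points (not the Al-Hashemite plant data) to show how a modest reduction in pump pressure, if recovery can be held or improved, translates into an energy saving of the kind the optimisation targets:

```python
def specific_energy_consumption(p_bar, q_feed_m3h, q_perm_m3h, pump_eff=0.8):
    # pumping power (kW) = pressure (Pa) x feed flow (m^3/s) / pump efficiency / 1000;
    # SEC is that power per unit of permeate produced, in kWh per m^3
    power_kw = (p_bar * 1e5) * (q_feed_m3h / 3600.0) / pump_eff / 1000.0
    return power_kw / q_perm_m3h


base = specific_energy_consumption(9.0, 40.0, 30.0)        # illustrative base point
optimised = specific_energy_consumption(7.5, 40.0, 31.3)   # illustrative optimised point
saving = 100 * (base - optimised) / base                   # percentage SEC reduction
```

The two levers visible here, operating pressure and permeate recovery, are exactly the trade-off a model-based optimiser balances against product salinity constraints.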
    • Modeling and analysis of hybrid solar water desalination system for different scenarios in Indonesia

      Fairuz, A.; Umam, M.F.; Hasanuzzaman, M.; Rahim, N.A.; Mujtaba, Iqbal M. (2023-01-15)
      Clean water demand has significantly increased due to the rise in the global population. However, most water on Earth has a salinity too high for direct consumption; only about one-fortieth of the total water source is freshwater. Desalinated water is one of the potential solutions to meet the growing demand for freshwater, but desalination is highly energy intensive. This paper analyses the energy, economic, and environmental performance of a 5 m³/day PV (photovoltaic)-powered reverse osmosis (RO) desalination system. Three scenarios of PV-RO, with and without battery storage and a diesel generator hybrid, have been analyzed and investigated for the annual estimated load, net present value, payback period, and the water and electricity production costs. The CO2 avoidance over the lifetime operation of all scenarios is also evaluated. This study shows that the PV-RO system without battery, with 6.3 kW of PV panels installed and a 2-day water storage tank, is the most economically profitable. For this scenario, the Levelized Cost of Electricity (LCOE), Levelized Cost of Water (LCOW), and Payback Period (PBP) are found to be $0.154/kWh, $0.627/m³, and five years, respectively. In addition, for this scenario, the CO2 emissions avoidance was the highest (111,690 kg CO2eq per year) compared to the other scenarios.
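The levelised-cost and payback figures quoted above come from standard annualised-cost formulas. A Python sketch with illustrative capital, O&M, and discount-rate numbers (none taken from the paper's scenarios):

```python
def crf(rate, years):
    # capital recovery factor: annualises an upfront investment
    # over its lifetime at a given discount rate
    f = (1 + rate) ** years
    return rate * f / (f - 1)


def levelised_cost(capex, annual_opex, annual_output, rate=0.08, years=20):
    # LCOE ($/kWh) or LCOW ($/m^3): annualised capital plus O&M, per unit output
    return (capex * crf(rate, years) + annual_opex) / annual_output


def payback_period(capex, annual_saving):
    # simple (undiscounted) payback period in years
    return capex / annual_saving


# illustrative 5 m^3/day plant: $12,000 capex, $600/yr O&M, water output 5 x 365 m^3/yr
lcow = levelised_cost(capex=12000, annual_opex=600, annual_output=5 * 365)
```

Running the same formulas with each scenario's capex, O&M, and output is how the three PV-RO configurations can be ranked on LCOE, LCOW, and PBP.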
    • Dynamic modelling and simulation of industrial scale multistage flash desalination process

      Hasan, Hasan; Alsadaie, S.; Al-Obaidi, M.A.; Mujtaba, Iqbal M. (2023-05-01)
      The Multistage Flash (MSF) desalination process is still a dominant process, especially in the Gulf region, for producing high-quality freshwater. Although there has been an energy price surge in recent years, the MSF process will continue to operate in that region for the foreseeable future. The key challenge is how to keep such processes profitable. Understanding the dynamics of any process under uncertainty and disturbances is very important to make it operationally feasible and profitable. The main aim of this work is to understand the dynamics of an industrial-scale MSF process using a high-fidelity and reliable process model. For this purpose, a detailed dynamic model of the MSF process incorporating key new features is developed and validated against actual data from a large-scale seawater desalination plant. The model is then used to study the behaviour of large-scale MSF processes under disturbances in steam temperature, feed temperature, and recycle brine flow rate. The simulation results show that the last stage requires a longer time to settle than the preceding stages. In addition, steam temperature has an insignificant influence on the performance ratio compared to the inlet seawater temperature and recycle brine flow rate. Furthermore, it is found that the productivity of the plant can be higher in winter than in summer; however, this benefit comes at the expense of increased steam consumption in winter, resulting in a lower performance ratio.
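The stage-by-stage flashing that such a dynamic model resolves can be caricatured with a steady-state energy balance: the sensible heat released as brine flashes down a stage's temperature drop supplies the latent heat of the vapour produced. A deliberately simplified Python sketch (constant properties, equal stage temperature drops, brine mass loss neglected; all numbers illustrative, not plant data):

```python
CP = 4.18     # brine specific heat, kJ/(kg K) -- approximate constant
HFG = 2330.0  # latent heat of vaporisation, kJ/kg -- approximate constant


def stage_distillate(m_brine, t_in, t_stage):
    # flashing energy balance for one stage:
    # sensible heat released = latent heat of the vapour formed
    return m_brine * CP * (t_in - t_stage) / HFG


def msf_train(m_brine, t_top, t_last, n_stages):
    # equal temperature drop per stage; returns the distillate from each stage
    dt = (t_top - t_last) / n_stages
    out, t = [], t_top
    for _ in range(n_stages):
        out.append(stage_distillate(m_brine, t, t - dt))
        t -= dt
    return out


# 1000 kg/h recycle brine flashing from 90 C to 40 C over 20 stages
d = msf_train(m_brine=1000.0, t_top=90.0, t_last=40.0, n_stages=20)
```

A dynamic model adds stage holdups, brine mass depletion, temperature-dependent properties, and inter-stage coupling, which is why the last stage in the train settles more slowly after a disturbance.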
    • An Automated Approach to Instrumenting the Up-on-the-Toes Test(s)

      Zahid, Sarah A.; Celik, Y.; Godfrey, A.; Buckley, John G. (MDPI, 2023-06-26)
      Normal ankle function provides a key contribution to everyday activities, particularly step/stair ascent and descent, where many falls occur. The rising to up-on-the-toes (UTT) 30-second test (UTT-30) is used in the clinical assessment of ankle muscle strength/function and endurance and is typically assessed by an observer counting the UTT movements completed. The aims of this study are: (i) to determine whether inertial measurement units (IMUs) provide valid assessment of the UTT-30 by comparing IMU-derived metrics with those from a force-platform (FP), and (ii) to describe how IMUs can be used to provide valid assessment of the movement dynamics/stability when performing a single UTT movement that is held for 5 s (UTT-stand). Twenty adults (26.2 ± 7.7 years) performed a UTT-30 and a UTT-stand on a force-platform with IMUs attached to each foot and the lumbar spine. We evaluate the agreement/association between IMU measures and measures determined from the FP. For UTT-30, IMU analysis of peaks in plantarflexion velocity and in the FP's centre of pressure (CoP) velocity was used to identify each repeated UTT movement and provided an objective means to discount any UTT movements that were not completed 'fully'. UTT movements deemed not to have been completed 'fully' were those that yielded peak plantarflexion and CoP velocity values during the period of rising to up-on-the-toes that were more than 1 SD below each participant's mean peak rising velocity across their repeated UTTs. The number of UTT movements detected by the IMU approach (23.5) agreed with the number determined by the FP (23.6), and each approach determined the same number of 'fully' completed movements (IMU, 19.9; FP, 19.7). For UTT-stand, IMU-derived movement dynamics/postural stability were moderately-to-strongly correlated with measures derived from the FP. Our findings highlight that the use of IMUs can provide valid assessment of UTT test(s).
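The 1-SD discounting criterion described above is straightforward to express in code. A Python sketch (the peak-velocity values are synthetic, not study data):

```python
import statistics


def count_full_utt(peak_velocities):
    # each value is the peak plantarflexion (or CoP) velocity of one detected rise;
    # rises whose peak falls more than 1 SD below the participant's mean peak
    # rising velocity are discounted as not 'fully' completed
    mean = statistics.mean(peak_velocities)
    sd = statistics.stdev(peak_velocities)   # sample standard deviation
    threshold = mean - sd
    full = [v for v in peak_velocities if v >= threshold]
    return len(peak_velocities), len(full)


# eight detected rises; two weak ones (0.9 and 1.0) should be discounted
detected, full = count_full_utt([1.9, 2.1, 2.0, 0.9, 2.2, 2.1, 1.0, 2.0])
```

In practice the per-rise peaks would come from peak detection on the IMU plantarflexion velocity signal; the same counting logic then applies to the FP's CoP velocity peaks.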
    • Design and evaluation of an interactive quality dashboard for national clinical audit data: a realist evaluation

      Randell, Rebecca; Alvarado, Natasha; Elshehaly, Mai; McVey, Lynn; West, R.M.; Doherty, P.; Dowding, D.; Farrin, A.J.; Feltbower, R.G.; Gale, C.P.; et al. (2022-05)
      Background: National audits aim to reduce variations in quality by stimulating quality improvement. However, varying provider engagement with audit data means that this is not being realised. Aim: The aim of the study was to develop and evaluate a quality dashboard (i.e. QualDash) to support clinical teams’ and managers’ use of national audit data. Design: The study was a realist evaluation and biography of artefacts study. Setting: The study involved five NHS acute trusts. Methods and results: In phase 1, we developed a theory of national audits through interviews. Data use was supported by data access, audit staff skilled in producing data visualisations, data timeliness and quality, and the perceived importance of metrics. Data were mainly used by clinical teams. Organisational-level staff questioned the legitimacy of national audits. In phase 2, QualDash was co-designed and the QualDash theory was developed. QualDash provides interactive customisable visualisations to enable the exploration of relationships between variables. Locating QualDash on site servers gave users control of data upload frequency. In phase 3, we developed an adoption strategy through focus groups. ‘Champions’, awareness-raising through e-bulletins and demonstrations, and quick reference tools were agreed. In phase 4, we tested the QualDash theory using a mixed-methods evaluation. Constraints on use were metric configurations that did not match users’ expectations, affecting champions’ willingness to promote QualDash, and limited computing resources. Easy customisability supported use. The greatest use was where data use was previously constrained. In these contexts, report preparation time was reduced and efforts to improve data quality were supported, although the interrupted time series analysis did not show improved data quality. Twenty-three questionnaires were returned, revealing positive perceptions of ease of use and usefulness.
In phase 5, the feasibility of conducting a cluster randomised controlled trial of QualDash was assessed. Interviews were undertaken to understand how QualDash could be revised to support a region-wide Gold Command. Requirements included multiple real-time data sources and functionality to help identify priorities. Conclusions: Audits seeking to widen engagement may find the following strategies beneficial: involving a range of professional groups in choosing metrics; real-time reporting; presenting ‘headline’ metrics important to organisational-level staff; using routinely collected clinical data to populate data fields; and dashboards that help staff to explore and report audit data. Those designing dashboards may find it beneficial to include the following: ‘at a glance’ visualisation of key metrics; visualisations configured in line with existing visualisations that teams use, with clear labelling; functionality that supports the creation of reports and presentations; the ability to explore relationships between variables and drill down to look at subgroups; and low requirements for computing resources. Organisations introducing a dashboard may find the following strategies beneficial: a clinical champion to promote use; testing with real data by audit staff; establishing routines for integrating use into work practices; involving audit staff in adoption activities; and allowing customisation. Limitations: The COVID-19 pandemic stopped phase 4 data collection, limiting our ability to further test and refine the QualDash theory. Questionnaire results should be treated with caution because of the small, possibly biased, sample. Control sites for the interrupted time series analysis were not possible because of research and development delays. One intervention site did not submit data. Limited uptake meant that assessing the impact on more measures was not appropriate.
Future work: The extent to which national audit dashboards are used and the strategies national audits use to encourage uptake, a realist review of the impact of dashboards, and rigorous evaluations of the impact of dashboards and the effectiveness of adoption strategies should be explored. Study registration: This study is registered as ISRCTN18289782.