Engineering and Digital Technology: Recent submissions
Now showing items 1-20 of 2615
-
A fuzzy data-driven reliability analysis for risk assessment and decision making using Temporal Fault TreesFuzzy data-driven reliability analysis has been used in different safety-critical domains for risk assessment and decision-making where precise failure data is non-existent. Expert judgements and fuzzy set theory have been combined with different variants of fault trees as part of fuzzy data-driven reliability analysis studies. In such fuzzy fault tree analyses, different people represented failure data using different membership functions for the fuzzy set, and different parameters were set differently in the expert opinion elicitation process. Due to the availability of a wide variety of options, it is possible to obtain different outcomes when choosing one option over another. This article performed an analysis in the context of fuzzy data-based temporal fault tree analysis to investigate the effect of choosing different membership functions on the estimated system reliability and criticality ranking of different failure events. Moreover, the effect of using different values for the relaxation factor, a parameter set during the expert elicitation process, was studied on the system reliability and criticality evaluation. The experiments on the fuel distribution system case study show system reliability did not vary when triangular and trapezoidal fuzzy numbers were used with the same upper and lower bounds. However, it was seen that the criticality rankings of a couple of events were changed due to choosing different membership functions and different values of relaxation factor
-
Characterization and life cycle assessment of geopolymer mortars with masonry units and recycled concrete aggregates assorted from construction and demolition wasteDeveloping a fast, cost-effective, eco-friendly solution to recycle large amounts of construction and demolition waste (CDW) generated from construction industry-related activities and natural disasters is crucial. The present investigation aims to offer a solution for repurposing CDW into building materials suitable for accelerated construction and housing in developing countries and disaster-prone areas. Feasibility of recycled concrete aggregate (RCA) inclusion in geopolymer mortars constituted entirely from CDW (masonry elements) was investigated via an environmental impact-oriented approach by addressing the composition related key parameters. Mechanical performance was evaluated through compressive strength tests, and scanning electron microscope (SEM) imaging with line mapping analyses were carried out to monitor the interfacial transition zone (ITZ) properties. To investigate the environmental impacts of the geopolymer mortars and highlight the advantages over Portland cement-based mortars, a cradle-to-gate life cycle assessment (LCA) was performed. Findings revealed that roof tile (RT)-based geopolymer mortars mainly exhibited better strength performance due to their finer particle size. Mixtures activated with 15 M NaOH solution and cured at 105 °C achieved an average compressive strength above 55 MPa. RCA size was the most influential parameter on compressive strength, and a smaller maximum RCA size significantly increased the compressive strength. Microstructural analyses showed that the ITZ around smaller RCAs was relatively thinner, resulting in better compressive strength results. LCA proved that CDW-based geopolymer mortars provide the same compressive strength with around 60% less CO2 emissions and similar energy consumption compared to Portland cement-based mortars.
-
Transformative role of big data through enabling capability recognition in constructionBig data application is a significant transformative driver of change in the retail, health, engineering, and advanced manufacturing sectors. Big data studies in construction are still somewhat limited, although there is increasing interest in what big data application could achieve. Through interviews with construction professionals, this paper identifies the capabilities needed in construction firms to enable the accrual of the potentially transformative benefits of big data application in construction. Based on previous studies, big data application capabilities, needed to transform construction processes, focussed on data, people, technology, and organisation. However, the findings of this research suggest a critical modification to that focus to include knowledge and the organisational environment along with people, data, and technology. The research findings show that construction firms use big data with a combination strategy to enable transformation by (a) driving an in-house data management policy to rolling-out the big data capabilities; (b) fostering collaborative capabilities with external firms for resource development, and (c) outsourcing big data services to address the capabilities deficits impacting digital transformation.
-
Early diagnosis and personalised treatment focusing on synthetic data modelling: Novel visual learning approach in healthcareThe early diagnosis and personalised treatment of diseases are facilitated by machine learning. The quality of data has an impact on diagnosis because medical data are usually sparse, imbalanced, and contain irrelevant attributes, resulting in suboptimal diagnosis. To address the impacts of data challenges, improve resource allocation, and achieve better health outcomes, a novel visual learning approach is proposed. This study contributes to the visual learning approach by determining whether less or more synthetic data are required to improve the quality of a dataset, such as the number of observations and features, according to the intended personalised treatment and early diagnosis. In addition, numerous visualisation experiments are conducted, including using statistical characteristics, cumulative sums, histograms, correlation matrix, root mean square error, and principal component analysis in order to visualise both original and synthetic data to address the data challenges. Real medical datasets for cancer, heart disease, diabetes, cryotherapy and immunotherapy are selected as case studies. As a benchmark and point of classification comparison in terms of such as accuracy, sensitivity, and specificity, several models are implemented such as k-Nearest Neighbours and Random Forest. To simulate algorithm implementation and data, Generative Adversarial Network is used to create and manipulate synthetic data, whilst, Random Forest is implemented to classify the data. An amendable and adaptable system is constructed by combining Generative Adversarial Network and Random Forest models. The system model presents working steps, overview and flowchart. Experiments reveal that the majority of data-enhancement scenarios allow for the application of visual learning in the first stage of data analysis as a novel approach. 
To achieve meaningful adaptable synergy between appropriate quality data and optimal classification performance while maintaining statistical characteristics, visual learning provides researchers and practitioners with practical human-in-the-loop machine learning visualisation tools. Prior to implementing algorithms, the visual learning approach can be used to actualise early, and personalised diagnosis. For the immunotherapy data, the Random Forest performed best with precision, recall, f-measure, accuracy, sensitivity, and specificity of 81%, 82%, 81%, 88%, 95%, and 60%, as opposed to 91%, 96%, 93%, 93%, 96%, and 73% for synthetic data, respectively. Future studies might examine the optimal strategies to balance the quantity and quality of medical data.
-
An overview of safety and security analysis frameworks for the Internet of ThingsThe rapid progress of the Internet of Things (IoT) has continued to offer humanity numerous benefits, including many security and safety-critical applications. However, unlocking the full potential of IoT applications, especially in high-consequence domains, requires the assurance that IoT devices will not constitute risk hazards to the users or the environment. To design safe, secure, and reliable IoT systems, numerous frameworks have been proposed to analyse the safety and security, among other properties. This paper reviews some of the prominent classical and model-based system engineering (MBSE) approaches for IoT systems’ safety and security analysis. The review established that most analysis frameworks are based on classical manual approaches, which independently evaluate the two properties. The manual frameworks tend to inherit the natural limitations of informal system modelling, such as human error, a cumbersome processes, time consumption, and a lack of support for reusability. Model-based approaches have been incorporated into the safety and security analysis process to simplify the analysis process and improve the system design’s efficiency and manageability. Conversely, the existing MBSE safety and security analysis approaches in the IoT environment are still in their infancy. The limited number of proposed MBSE approaches have only considered limited and simple scenarios, which are yet to adequately evaluate the complex interactions between the two properties in the IoT domain. The findings of this survey are that the existing methods have not adequately addressed the analysis of safety/security interdependencies, detailed cyber security quantification analysis, and the unified treatment of safety and security properties. The existing classical and MBSE frameworks’ limitations obviously create gaps for a meaningful assessment of IoT dependability. 
To address some of the gaps, we proposed a possible research direction for developing a novel MBSE approach for the IoT domain’s safety and security coanalysis framework.
-
Optimal planning and operation of distribution systems using network reconfiguration and flexibility servicesThis paper proposes a novel approach for the reliability cost-based optimization of Distribution Systems (DS), considering tie line-based network reconfiguration method with integration of Distributed Energy Resources (DER). An optimal Energy not Supplied (ENS) index is proposed, where the capacity is handled by curtailment devices in the network such as sectionalizers and the energy supplied by DERs which considers Flexibility Services (FS) within a market environment. The decision variables include the investment and operation of tie-lines and buying regulation services from DER such as Distributed Generation (DG) and Battery Energy Storage Systems (BESS). The results validate the cost-effectiveness of the proposed method through implementation of these technologies to improve the reliability of the DS, within a comprehensive set of case-study scenarios for a 16-bus UK generic distribution system (UKGDS). The case study results indicate that significant savings can be achieved through the proposed method, ranging between 36%–71% depending on the level of automation in tie-line operations in combination with the settlement price for the power-balance of FS. This illustrates that the proposed DS reliability cost-based optimization method could have a significant impact for real world DG and BESS applications.
-
A knowledge-driven model to assess inherent safety in process infrastructureProcess safety has drawn increasing attention in recent years and has been investigated from different perspectives, such as quantitative risk analysis, consequence modeling, and regulations. However, rare attempts have been made to focus on inherent safety design assessment, despite being the most cost-effective safety tactic and its vital role in sustainable development and safe operation of process infrastructure. Accordingly, the present research proposed a knowledge-driven model to assess inherent safety in process infrastructure under uncertainty. We first developed a holistic taxonomy of contributing factors into inherent safety design considering chemical, reaction, process, equipment, human factors, and organizational concerns associated with process plants. Then, we used subject matter experts, content validity ratio (CVR), and content validity index (CVI) to validate the taxonomy and data collection tools. We then employed a fuzzy inference system and the Extent Analysis (EA) method for knowledge acquisition under uncertainty. We tested the proposed model on a steam methane-reforming plant that produces hydrogen as renewable energy. The findings revealed the most contributing factors and indicators to improve the inherent safety design in the studied plant and effectively support the decision-making process to assign proper safety countermeasures.
-
Poloxamer-based nanogels as delivery systems: how structural requirements can drive their biological performancePoloxamers or Pluronics®-based nanogels are one of the most used matrices for developing delivery systems. Due to their thermoresponsive and flexible mechanical properties, they allowed the incorporation of several molecules including drugs, biomacromolecules, lipid-derivatives, polymers, and metallic, polymeric, or lipid nanocarriers. The thermogelling mechanism is driven by micelles formation and their self-assembly as phase organizations (lamellar, hexagonal, cubic) in response to microenvironmental conditions such as temperature, osmolarity, and additives incorporated. Then, different biophysical techniques have been used for investigating those structural transitions from the mechanisms to the preferential component’s orientation and organization. Since the design of PL-based pharmaceutical formulations is driven by the choice of the polymer type, considering its physico-chemical properties, it is also relevant to highlight that factors inherent to the polymeric matrix can be strongly influenced by the presence of additives and how they are able to determine the nanogels biopharmaceuticals properties such as bioadhesion, drug loading, surface interaction behavior, dissolution, and release rate control. In this review, we discuss the general applicability of three of the main biophysical techniques used to characterize those systems, scattering techniques (small-angle X-ray and neutron scattering), rheology and Fourier transform infrared absorption spectroscopy (FTIR), connecting their supramolecular structure and insights for formulating effective therapeutic delivery systems.
-
Enhancement of precise underwater object localizationUnderwater communication applications extensively use localization services for object identification. Because of their significant impact on ocean exploration and monitoring, underwater wireless sensor networks (UWSN) are becoming increasingly popular, and acoustic communications have largely overtaken radio frequency (RF) broadcasts as the dominant means of communication. The two localization methods that are most frequently employed are those that estimate the angle of arrival (AOA) and the time difference of arrival (TDoA). The military and civilian sectors rely heavily on UWSN for object identification in the underwater environment. As a result, there is a need in UWSN for an accurate localization technique that accounts for dynamic nature of the underwater environment. Time and position data are the two key parameters to accurately define the position of an object. Moreover, due to climate change there is now a need to constrain energy consumption by UWSN to limit carbon emission to meet net-zero target by 2050. To meet these challenges, we have developed an efficient localization algorithm for determining an object position based on the angle and distance of arrival of beacon signals. We have considered the factors like sensor nodes not being in time sync with each other and the fact that the speed of sound varies in water. Our simulation results show that the proposed approach can achieve great localization accuracy while accounting for temporal synchronization inaccuracies. When compared to existing localization approaches, the mean estimation error (MEE) and energy consumption figures, the proposed approach outperforms them. The MEEs is shown to vary between 84.2154m and 93.8275m for four trials, 61.2256m and 92.7956m for eight trials, and 42.6584m and 119.5228m for twelve trials. Comparatively, the distance-based measurements show higher accuracy than the angle-based measurements.
-
Machine learning based small bowel video capsule endoscopy analysis: Challenges and opportunitiesVideo capsule endoscopy (VCE) is a revolutionary technology for the early diagnosis of gastric disorders. However, owing to the high redundancy and subtle manifestation of anomalies among thousands of frames, the manual construal of VCE videos requires considerable patience, focus, and time. The automatic analysis of these videos using computational methods is a challenge as the capsule is untamed in motion and captures frames inaptly. Several machine learning (ML) methods, including recent deep convolutional neural networks approaches, have been adopted after evaluating their potential of improving the VCE analysis. However, the clinical impact of these methods is yet to be investigated. This survey aimed to highlight the gaps between existing ML-based research methodologies and clinically significant rules recently established by gastroenterologists based on VCE. A framework for interpreting raw frames into contextually relevant frame-level findings and subsequently merging these findings with meta-data to obtain a disease-level diagnosis was formulated. Frame-level findings can be more intelligible for discriminative learning when organized in a taxonomical hierarchy. The proposed taxonomical hierarchy, which is formulated based on pathological and visual similarities, may yield better classification metrics by setting inference classes at a higher level than training classes. Mapping from the frame level to the disease level was structured in the form of a graph based on clinical relevance inspired by the recent international consensus developed by domain experts. Furthermore, existing methods for VCE summarization, classification, segmentation, detection, and localization were critically evaluated and compared based on aspects deemed significant by clinicians. 
Numerous studies pertain to single anomaly detection instead of a pragmatic approach in a clinical setting. The challenges and opportunities associated with VCE analysis were delineated. A focus on maximizing the discriminative power of features corresponding to various subtle lesions and anomalies may help cope with the diverse and mimicking nature of different VCE frames. Large multicenter datasets must be created to cope with data sparsity, bias, and class imbalance. Explainability, reliability, traceability, and transparency are important for an ML-based diagnostics system in a VCE. Existing ethical and legal bindings narrow the scope of possibilities where ML can potentially be leveraged in healthcare. Despite these limitations, ML based video capsule endoscopy will revolutionize clinical practice, aiding clinicians in rapid and accurate diagnosis.
-
The Effects of Vaporisation Models on the FCC Riser ReactorThis work presents a steady-state one-dimensional model of the FCC riser considering the vaporisation of the gas oil feed and subsequent cracking reactions. The evaporation of droplets is studied using three models: the classical homogeneous model and the heterogeneous vaporisation models from the literature. Droplets are modelled using the Lagrangian framework model for particles moving through a fluid. This was coupled with the gas–solid flow field describing the catalyst particulate transport in the riser. Cracking reaction kinetics are modelled using a four-lumped model. The model was then validated against published plant data. The model performed well in terms of gas oil conversion, gasoline yield, pressure drop, and phase temperature profiles. Therefore, it is suitable for use in the design and optimisation of new and existing FCC unit risers, particularly in cost–benefit analysis considering the current push away from petroleum energy sources. It was found that vaporisation models are largely insignificant in terms of gas oil conversion profiles and gasoline yield for usual operation conditions of FCC risers, which is a finding that had yet to be proven in the literature. Vaporisation models are shown to only affect conversion and yield when the initial droplet exceeds 2000 μm.
-
Simulation and optimisation of a medium scale reverse osmosis brackish water desalination system under variable feed quality: Energy saving and maintenance opportunityIn this work, we considered model-based simulation and optimisation of a medium scale brackish water desalination process. The mathematical model is validated using actual multistage RO plant data of Al- Hashemite University (Jordan). Using the validated model, the sensitivity of different operating parameters such as pump pressure, brackish water flow rate and seasonal water temperature (covering the whole year) on the performance indicators such as productivity, product salinity and specific energy consumption of the process is conducted. For a given feed flow rate and pump pressure, winter season produces less freshwater that in summer in line with the assumption that winter water demand is less than that in summer. With the soaring energy prices globally, any opportunity for the reduction of energy is not only desirable from the economic point of view but is an absolute necessity to meet the net zero carbon emission pledge by many nations, as globally most desalination plants use fossil fuel as the main source of energy. Therefore, the second part of this paper attempts to minimise the specific energy consumption of the RO system using model-based optimisation technique. The study resulted not only 19 % reduction in specific energy but also 4.46 % increase in productivity in a particular season of the year. For fixed product demand, this opens the opportunity for scheduling cleaning and maintenance of the RO process without having to consider full system shutdown.
-
Modeling and analysis of hybrid solar water desalination system for different scenarios in IndonesiaClean water demand has significantly increased due to the rise in the global population. However, most water on the Earth has high saline content that cannot be consumed directly; only about one over forty of the total water source is freshwater. Desalinated water is one of the potential solutions to meet the growing demand for freshwater, which is highly energy intensive. This paper analyses the energy, economic and environmental performance of a 5 m3/day PV (photovoltaic) powered reverse osmosis (RO) desalination system. Three scenarios of PV-RO with and without battery storage and diesel generator hybrid systems have been analyzed and investigated for the annual estimate load, net present value, and payback period of the water and electricity production costs. Also, the CO2 avoidance over the lifetime operation of all scearios is evaluated. This study shows that the PV-RO system without battery with 6.3 kW PV panels installed and with a 2-days water storage tank system is the most profitable economically f. For this scenario, the Levelized Cost of Electricity (LCOE), Levelized Cost of Water (LCOW), and Payback Period (PBP) are found to be $0.154/kWh, $0.627/m3, and five years, respectively. In addition, for this scenario, the CO2 emissions avoidance was the maximum (111,690 kg.CO2eq per year) compared to other scenarios.
-
Dynamic modelling and simulation of industrial scale multistage flash desalination processMultistage Flash (MSF) desalination process is still a dominant process, especially in the Gulf region, to produce high quality freshwater. Although there has been energy price surge in recent years, MSF process will continue to operate in that region for some foreseeable future. The key challenge is how to make such processes still profitable. Understanding the dynamics of any processes under uncertainty and disturbances is very important to make a process operationally feasible and profitable. The main aim of this work is to understand the dynamics of industrial scale MSF process using high fidelity and reliable process model. For this purpose, a detailed dynamic model for the MSF process incorporating key and new features is developed and validated against the actual data of a large-scale seawater desalination plant. The model is then used to study the behaviour of large scale MSF processes for disturbances in steam temperature, feed temperature and the recycle brine flow rate. The simulation results show that the last stage requires a longer time to settle compared to the preceding stages. In addition, steam temperature shows insignificant influence on the performance ratio compared to the inlet seawater temperature and recycle brine flow rate. Furthermore, it is found that the productivity of plant can increase in the winter compared to that in the summer. However, this benefit comes at the expense of increased steam consumption in the winter, resulting in a low performance ratio.
-
An Automated Approach to Instrumenting the Up-on-the-Toes Test(s)Normal ankle function provides a key contribution to everyday activities, particularly step/stair ascent and descent, where many falls occur. The rising to up-on-the-toes (UTT) 30 second test (UTT-30) is used in the clinical assessment of ankle muscle strength/function and endurance and is typically assessed by an observer counting the UTT movement completed. The aims of this study are: (i) to determine whether inertial measurement units (IMUs) provide valid assessment of the UTT-30 by comparing IMU-derived metrics with those from a force-platform (FP), and (ii) to de-scribe how IMUs can be used to provide valid assessment of the movement dynamics/stability when performing a single UTT movement that is held for 5 s (UTT-stand). Twenty adults (26.2 ± 7.7 years) performed a UTT-30 and a UTT-stand on a force-platform with IMUs attached to each foot and the lumbar spine. We evaluate the agreement/association between IMU measures and measures de-termined from the FP. For UTT-30, IMU analysis of peaks in plantarflexion velocity and in FP’s centre of pressure (CoP) velocity was used to identify each repeated UTT movement and provided an objective means to discount any UTT movements that were not completed ‘fully’. UTT movements that were deemed to have not been completed ‘fully’ were those that yielded peak plantarflexion and CoP velocity values during the period of rising to up-on-the-toes that were below 1 SD of each participant’s mean peak rising velocity across their repeated UTT. The number of UTT movements detected by the IMU approach (23.5) agreed with the number determined by the FP (23.6), and each approach determined the same number of ‘fully’ completed movements (IMU, 19.9; FP, 19.7). For UTT-stand, IMU-derived movement dynamics/postural stability were moderately-to-strongly correlated with measures derived from the FP. 
Our findings highlight that the use of IMUs can provide valid assessment of UTT test(s).
-
Design and evaluation of an interactive quality dashboard for national clinical audit data: a realist evaluationBackground: National audits aim to reduce variations in quality by stimulating quality improvement. However, varying provider engagement with audit data means that this is not being realised. Aim: The aim of the study was to develop and evaluate a quality dashboard (i.e. QualDash) to support clinical teams’ and managers’ use of national audit data. Design: The study was a realist evaluation and biography of artefacts study. Setting: The study involved five NHS acute trusts. Methods and results: In phase 1, we developed a theory of national audits through interviews. Data use was supported by data access, audit staff skilled to produce data visualisations, data timeliness and quality, and the importance of perceived metrics. Data were mainly used by clinical teams. Organisational-level staff questioned the legitimacy of national audits. In phase 2, QualDash was co-designed and the QualDash theory was developed. QualDash provides interactive customisable visualisations to enable the exploration of relationships between variables. Locating QualDash on site servers gave users control of data upload frequency. In phase 3, we developed an adoption strategy through focus groups. ‘Champions’, awareness-raising through e-bulletins and demonstrations, and quick reference tools were agreed. In phase 4, we tested the QualDash theory using a mixed-methods evaluation. Constraints on use were metric configurations that did not match users’ expectations, affecting champions’ willingness to promote QualDash, and limited computing resources. Easy customisability supported use. The greatest use was where data use was previously constrained. In these contexts, report preparation time was reduced and efforts to improve data quality were supported, although the interrupted time series analysis did not show improved data quality. 
Twenty-three questionnaires were returned, revealing positive perceptions of ease of use and usefulness. In phase 5, the feasibility of conducting a cluster randomised controlled trial of QualDash was assessed. Interviews were undertaken to understand how QualDash could be revised to support a region-wide Gold Command. Requirements included multiple real-time data sources and functionality to help to identify priorities. Conclusions: Audits seeking to widen engagement may find the following strategies beneficial: involving a range of professional groups in choosing metrics; real-time reporting; presenting ‘headline’ metrics important to organisational-level staff; using routinely collected clinical data to populate data fields; and dashboards that help staff to explore and report audit data. Those designing dashboards may find it beneficial to include the following: ‘at a glance’ visualisation of key metrics; visualisations configured in line with existing visualisations that teams use, with clear labelling; functionality that supports the creation of reports and presentations; the ability to explore relationships between variables and drill down to look at subgroups; and low requirements for computing resources. Organisations introducing a dashboard may find the following strategies beneficial: clinical champion to promote use; testing with real data by audit staff; establishing routines for integrating use into work practices; involving audit staff in adoption activities; and allowing customisation. Limitations: The COVID-19 pandemic stopped phase 4 data collection, limiting our ability to further test and refine the QualDash theory. Questionnaire results should be treated with caution because of the small, possibly biased, sample. Control sites for the interrupted time series analysis were not possible because of research and development delays. One intervention site did not submit data. 
Limited uptake meant that assessing the impact on more measures was not appropriate. Future work: The extent to which national audit dashboards are used and the strategies national audits use to encourage uptake, a realist review of the impact of dashboards, and rigorous evaluations of the impact of dashboards and the effectiveness of adoption strategies should be explored. Study registration: This study is registered as ISRCTN18289782.
-
Mechanical and thermal properties of lightweight concrete produced with polyester-coated pumice aggregateWith the technological advances in the field of building materials, there has been an increasing focus on the research of lightweight concrete made with coated aggregates for improving the durability of concrete. In this study, pumice aggregates were coated with cast-based polyester to obtain polymer-coated pumice aggregates (PCPA). Lightweight concretes were produced with different cement dosages (200, 250 and 300) and PCPAs at different ratios (0%, 50% and 100%). Physical properties, mechanical strength, thermal properties and internal structure analysis (SEM-EDS) of the produced concrete samples were performed. According to the RILEM functional classification of lightweight concrete, the test results showed that REF D300 and REF D250 dosage series are in the semi-load-bearing lightweight concrete class, and the other all series are in the insulation concrete class, and the produced concretes can be classified as lightweight insulation materials. It can also be used in non-load-bearing walls or as an alternative lightweight insulation material.
-
Using personalized avatars as an adjunct to an adult weight loss management program: randomized controlled feasibility study
Obesity is a global public health concern. Interventions rely predominantly on managing dietary intake and increasing physical activity; however, sustained adherence to behavioral regimens is often poor. A lack of sustained motivation and self-efficacy and poor adherence to behavioral regimens are recognized barriers to successful weight loss. Avatar-based interventions achieve better patient outcomes in the management of chronic conditions by promoting more active engagement. Virtual representations of self can affect real-world behavior, acting as a catalyst for sustained weight loss behavior. We evaluated whether a personalized avatar, offered as an adjunct to an established weight loss program, can increase participant motivation, sustain engagement, optimize service delivery, and improve participant health outcomes. A feasibility randomized design was used to determine the case for future development and evaluation of avatar-based technology in a randomized controlled trial. Participants were recruited from general practitioner referrals to a 12-week National Health Service weight improvement program. The main outcome measure was weight loss. Secondary outcome measures were quality of life and self-efficacy. Quantitative data were subjected to descriptive statistical tests and exploratory comparison between intervention and control arms. Feasibility and acceptability were assessed through interviews and analyzed using a framework approach. Health Research Authority ethics approval was granted. Overall, 10 men (n=7, 70% allocated to the intervention of routine care plus avatar and n=3, 30% to routine care) and 33 women (n=23, 70% to the intervention and n=10, 30% to routine care) were recruited. 
Participants' initial mean weight was greater in the intervention arm than in the routine care arm (126.3 kg vs 122.9 kg); the pattern of weight loss was similar across both arms of the study in the T0 to T1 period but accelerated in the T1 to T2 period for intervention participants, suggesting that access to the self-resembling avatar may promote greater engagement with weight loss initiatives in the short-to-medium term. Mean change in participants' weight from T0 to T2 was 4.5 kg (95% CI 2.7-6.3) in the routine care arm and 5.3 kg (95% CI 3.9-6.8) in the intervention arm. Quality-of-life and self-efficacy measures demonstrated greater improvement in the intervention arm at both T1 (105.5 for routine care arm and 99.7 for intervention arm) and T2 (100.1 for routine care arm and 81.2 for intervention arm). Overall, 13 participants (n=11, 85% women and n=2, 15% men) and two health care professionals were interviewed about their experience of using the avatar program. Participants found using the personalized avatar acceptable, and feedback reiterated that seeing a future self helped to reinforce motivation to change behavior. This feasibility study demonstrated that avatar-based technology may successfully promote engagement and motivation in weight loss programs, enabling participants to achieve greater weight loss and build self-confidence. ISRCTN Registry 17953876; https://doi.org/10.1186/ISRCTN17953876.
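The mean weight changes above are reported with 95% confidence intervals. A minimal sketch of how such an interval is typically computed from per-participant changes is shown below; the data values are hypothetical, not the trial's actual measurements.

```python
# Sketch: 95% confidence interval for a mean weight change, using the
# normal approximation (mean +/- 1.96 standard errors). Input data are
# HYPOTHETICAL per-participant weight losses in kg, not trial data.
import math
import statistics

def mean_ci95(changes):
    """Return (mean, lower, upper) for the 95% CI of the mean."""
    n = len(changes)
    mean = statistics.fmean(changes)
    se = statistics.stdev(changes) / math.sqrt(n)  # standard error of the mean
    return mean, mean - 1.96 * se, mean + 1.96 * se

weight_loss_kg = [4.1, 5.6, 6.2, 3.8, 5.0, 6.8, 4.9, 5.3]  # hypothetical
m, lo, hi = mean_ci95(weight_loss_kg)
print(f"mean change {m:.1f} kg (95% CI {lo:.1f}-{hi:.1f})")
```

For small samples such as this feasibility study, a t-distribution multiplier rather than 1.96 would give slightly wider, more appropriate intervals.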
-
Computer modelling of compact 28/38 GHz dual-band antenna for millimeter-wave 5G applications
A four-element compact dual-band patch antenna with a common ground plane operating at 28/38 GHz is proposed for millimeter-wave communication systems in this paper. The multiple-input multiple-output (MIMO) antenna geometry consists of a slotted ellipse enclosed within a hollow circle, orthogonally rotated, with a connected partial ground at the back. The overall size of the four-element MIMO antenna is 2.24λ × 2.24λ (at 27.12 GHz). The prototype of the four-element MIMO resonator was designed and printed on Rogers RT Duroid 5880 with εr = 2.2, loss tangent = 0.0009 and a thickness of 0.8 mm. It covers two bands with fractional bandwidths of 15.7% (27.12–31.34 GHz) and 4.2% (37.21–38.81 GHz) for millimeter-wave applications, with a gain of more than 4 dBi in both bands. The proposed antenna is also analysed in terms of MIMO diversity parameters, the envelope correlation coefficient (ECC) and diversity gain (DG). The experimental results in terms of reflection coefficient, radiation pattern, gain and MIMO diversity parameters correlate very well with the simulated ones, showing the potential of the proposed design for MIMO applications at millimeter-wave frequencies.
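The ECC and DG figures of merit mentioned above are commonly evaluated from the measured S-parameters of an antenna pair. A minimal sketch using the standard two-port S-parameter formula (valid for lossless antennas) is given below; the S-parameter values are hypothetical, not the paper's measured data.

```python
# Sketch: Envelope Correlation Coefficient (ECC) and Diversity Gain (DG)
# for a two-port MIMO antenna, computed from S-parameters via the standard
# lossless-antenna formula. S-parameter values are HYPOTHETICAL.

def ecc_from_s(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """ECC = |S11*.S12 + S21*.S22|^2 / ((1-|S11|^2-|S21|^2)(1-|S12|^2-|S22|^2))"""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s12) ** 2 - abs(s22) ** 2)
    return num / den

def diversity_gain(ecc: float) -> float:
    """Commonly used approximation: DG = 10 * sqrt(1 - ECC^2), in dB."""
    return 10 * (1 - ecc ** 2) ** 0.5

# Hypothetical S-parameters for a well-matched, well-isolated element pair
s11, s22 = complex(0.10, 0.05), complex(0.12, -0.04)
s12 = s21 = complex(0.02, 0.01)
ecc = ecc_from_s(s11, s12, s21, s22)
dg = diversity_gain(ecc)
print(f"ECC = {ecc:.6f}, DG = {dg:.3f} dB")
```

A low ECC (typically below 0.5, and near zero for a good design) with DG approaching 10 dB indicates effectively uncorrelated element patterns, which is the property the diversity analysis in the abstract checks.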