Operating System Based Perceptual Evaluation of Call Quality in Radio Telecommunications Networks. Development of call quality assessment at mobile terminals using the Symbian operating system, comparison with traditional approaches and proposals for a tariff regime relating call charging to perceived speech quality.
Supervisor: Gardiner, John G.
Keywords: Speech quality measurement; Mobile cellular telecommunication networks; Non-dedicated and heterogeneous network; SM (signal meter); BM (bandwidth meter)
Rights: The University of Bradford theses are licensed under a Creative Commons Licence.
Institution: University of Bradford
Department: School of Engineering, Design and Technology
Abstract: Call quality has been crucial from the inception of telecommunication networks. Operators need to monitor call quality from the end-user's perspective in order to retain subscribers and reduce subscriber 'churn'. Operators worry not only about call quality and interconnect revenue loss, but also about network connectivity issues in areas where mobile network gateways are prevalent. Bandwidth quality as experienced by the end-user is equally important in helping operators to reduce churn. The parameters that network operators use to improve call quality are mainly from the end-user's perspective. These parameters are usually ASR (answer seizure ratio), PDD (post-dial delay), NER (network efficiency ratio), the number of calls analyzed, and the number of successful calls. Operators use these parameters to evaluate and optimize the network to meet their quality requirements. Analysis of speech quality is a major arena for research. Traditionally, users' perception of speech quality has been measured offline using subjective listening tests. Such tests are, however, slow, tedious and costly. An alternative method is therefore needed: one that can be computed automatically on the subscriber's handset, be available to the operator as well as to subscribers and, at the same time, provide results comparable with conventional subjective scores. QMeter®, a set of tools for signal and bandwidth measurement developed with all the parameters that influence call and bandwidth quality experienced by the end-user in mind, addresses these issues and additionally facilitates dynamic tariff propositions which enhance the credibility of the operator. This research focuses on call quality parameters from the end-user's perspective. The call parameters used in the research are signal strength, successful call rate, normal drop call rate, and hand-over drop rate.
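The network-side ratios named above (ASR, NER) are simple proportions over call attempts. A minimal sketch of how they are conventionally computed is given below; the record fields and disposition labels are illustrative assumptions, not the thesis's actual data model.

```python
# Illustrative computation of ASR and NER over a list of call-attempt
# records. The "disposition" labels here are assumed, not taken from
# the thesis or from QMeter.

def asr(records):
    """Answer seizure ratio: answered calls as a percentage of seizures."""
    seizures = len(records)
    answered = sum(1 for r in records if r["disposition"] == "answered")
    return 100.0 * answered / seizures if seizures else 0.0

def ner(records):
    """Network efficiency ratio: calls the network delivered correctly,
    counting busy and ring-no-answer outcomes as network successes."""
    seizures = len(records)
    delivered = sum(1 for r in records
                    if r["disposition"] in ("answered", "busy", "no_answer"))
    return 100.0 * delivered / seizures if seizures else 0.0

calls = [
    {"disposition": "answered"},
    {"disposition": "busy"},
    {"disposition": "network_failure"},
    {"disposition": "answered"},
]
print(asr(calls))  # 50.0
print(ner(calls))  # 75.0
```

NER is always at least as large as ASR, since it also credits the network for calls the callee declined or missed; the gap between the two isolates user behaviour from network faults.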
Signal strength is measured every five milliseconds of an active call, and the average signal strength is calculated for each successful call. The successful call rate, normal drop rate and hand-over drop rate are combined to give a measure of overall call quality. Assessing call quality over bundles of 10 calls is proposed. These parameters are also visualized, for a better understanding of where quality is bad, good or excellent; this helps operators, as well as user groups, to measure quality and coverage. Operators boast about their bandwidth, but to know where speed must be improved they need a tool that can effectively measure speed from the end-user's perspective. BM (bandwidth meter), a tool developed as part of this research, measures the average speed of data sessions and stores the information for analysis at different locations. To address issues of quality in the subscriber segment, this research proposes varying tariffs based on call and bandwidth quality. Call charging based on call quality as perceived by the end-user is proposed, both to satisfy subscribers and to help operators improve customer satisfaction and increase average revenue per user. Tariff redemption procedures are put forward for bundles of 10 calls and 10 data sessions. In addition to varying tariffs, quality escalation processes are proposed. Deploying such tools on selected or random samples of users will substantially improve user loyalty which, in turn, will bring operational and economic advantages.
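The handset-side measurements described above can be sketched as follows. The weighting of the three rates and the quality-to-tariff scale are illustrative assumptions for exposition; they are not the thesis's actual QMeter® formulas.

```python
# Hedged sketch: average signal strength over 5 ms samples, a quality
# score for a bundle of 10 calls, and a quality-linked tariff multiplier.
# Weights and thresholds below are assumptions, not QMeter's formulas.

def average_signal(samples_dbm):
    """Mean of the signal-strength samples taken every 5 ms of a call."""
    return sum(samples_dbm) / len(samples_dbm)

def bundle_quality(bundle):
    """Quality score for a bundle of calls from the three rates used in
    the research: successful, normal-drop and hand-over-drop rates."""
    n = len(bundle)
    successful = sum(1 for c in bundle if c == "ok") / n
    normal_drop = sum(1 for c in bundle if c == "drop") / n
    handover_drop = sum(1 for c in bundle if c == "ho_drop") / n
    # Assumed weighting: hand-over drops penalised more than normal drops.
    return successful - 0.5 * normal_drop - 1.0 * handover_drop

def tariff_multiplier(quality, full_rate_threshold=0.9):
    """Charge the full rate only when bundle quality meets the threshold;
    otherwise scale the tariff down in proportion to the shortfall."""
    return min(1.0, quality / full_rate_threshold)

# A bundle of 10 calls: 7 successful, 2 normal drops, 1 hand-over drop.
bundle = ["ok"] * 7 + ["drop"] * 2 + ["ho_drop"]
q = bundle_quality(bundle)          # 0.7 - 0.5*0.2 - 1.0*0.1 = 0.5
print(round(tariff_multiplier(q), 3))
```

The point of bundling is that a single dropped call swings a per-call score from 1 to 0, while a rate over 10 calls moves in steps of 0.1, giving a tariff adjustment that tracks sustained quality rather than one-off events.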
Showing items related by title, author, creator and subject.
Information quality assessment in e-learning systems. Neagu, Daniel; Cullen, Andrea J.; Alkhattabi, Mona A. (University of Bradford, Informatics Research Institute, School of Computing, Informatics and Media, 2011-04-15) E-learning systems provide a promising solution as an information-exchange channel. Improved technology can mean faster and easier access to information, but does not necessarily ensure the quality of that information. It is therefore essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and of e-learning in particular, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework empirically, reporting validation results based on data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. Measuring information quality in an e-learning context is, however, difficult, because the concept of information quality is complex and the measurements are expected to be multidimensional in nature. Reliable measures need to be obtained in a systematic way, whilst considering the purpose of the measurement. We therefore start by adopting a Goal Question Metric (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme, based on a multi-element analysis technique. The results obtained are promising, and reveal that the framework and assessment scheme can give good predictions of information quality within an e-learning context. This research makes novel contributions, as it proposes a solution to the problems arising from the absence of consensus on evaluation standards and methods for measuring information quality within an e-learning context. It also anticipates the feasibility of using web mining techniques to automate the retrieval of the information required for quality measurement. The assessment model is useful to e-learning system designers, providers and users, as it gives a comprehensive indication of the quality of information in such systems, and also facilitates evaluation, comparison and analysis of information quality.
Citizens' continuous use of eGovernment services: The role of self-efficacy, outcome expectations and satisfaction. Alruwaie, M.; El-Haddadeh, R.; Weerakkody, Vishanth J.P.; Ismagilova, Elvira (2020-07) The continuous use of eGovernment services is a de facto requirement for their prosperity and success. A generalised sense of citizens' self-efficacy, expectations and satisfaction offers governments opportunities to retain needed engagement. This study examines the factors influencing citizens' continued use of eGovernment services. Through the integration of Social Cognitive Theory, Expectation Confirmation Theory, the DeLone and McLean IS success model, and E-S-QUAL, a survey of 471 citizens in the UK engaging in online public services found that prior experience, social influence, information quality, service quality, personal outcome expectation and satisfaction are significant predictors of citizens' intention to use eGovernment when regulated through citizens' self-efficacy. The present study extends the roles of pre-adoption and post-adoption by offering a self-regulating process, and demonstrates how critical it is for government leaders to understand the patterns of long-term, continual electronic-system use.
Exploring the potential for secondary uses of Dementia Care Mapping (DCM) data for improving the quality of dementia care. Khalid, Shehla; Surr, Claire A.; Neagu, Daniel; Small, Neil A. (2019-04) The reuse of existing datasets to identify mechanisms for improving healthcare quality has been widely encouraged, but has seen limited application within dementia care. Dementia Care Mapping (DCM) is an observational tool in widespread use, predominantly to assess and improve quality of care in single organisations. DCM data has the potential to be used for secondary purposes to improve quality of care; however, its suitability for such use requires careful evaluation. This study conducted in-depth interviews with 29 DCM users to identify issues, concerns and challenges regarding the secondary use of DCM data. Data were analysed using modified Grounded Theory. Major themes identified included the need to collect complementary contextual data in addition to DCM data, the need to reassure users regarding ethical issues associated with the storage and reuse of care-related data, and the need to assess and specify data quality for any data made available for secondary analysis.