Operating System Based Perceptual Evaluation of Call Quality in Radio Telecommunications Networks. Development of call quality assessment at mobile terminals using the Symbian operating system, comparison with traditional approaches and proposals for a tariff regime relating call charging to perceived speech quality.
Supervisor: Gardiner, John G.
Keywords: Speech quality measurement
Mobile cellular telecommunication networks
Non-dedicated and heterogeneous network
SM (signal meter)
BM (bandwidth meter)
The University of Bradford theses are licensed under a Creative Commons Licence.
Institution: University of Bradford
Department: School of Engineering, Design and Technology
Abstract: Call quality has been crucial from the inception of telecommunication networks. Operators need to monitor call quality from the end-user's perspective in order to retain subscribers and reduce subscriber 'churn'. Operators worry not only about call quality and interconnect revenue loss, but also about network connectivity issues in areas where mobile network gateways are prevalent. Bandwidth quality as experienced by the end-user is equally important in helping operators to reduce churn. The parameters that network operators use to improve call quality are mainly from the end-user's perspective: usually ASR (answer seizure ratio), PDD (post-dial delay), NER (network efficiency ratio), the number of calls for which these parameters have been analyzed, and the number of successful calls. Operators use these parameters to evaluate and optimize the network to meet their quality requirements. Analysis of speech quality is a major arena for research. Traditionally, users' perception of speech quality has been measured offline using subjective listening tests. Such tests are, however, slow, tedious and costly. An alternative method is therefore needed: one that can be computed automatically on the subscriber's handset, be available to the operator as well as to subscribers and, at the same time, provide results that are comparable with conventional subjective scores. QMeter®, a set of tools for signal and bandwidth measurement developed with all the parameters that influence the call and bandwidth quality experienced by the end-user in mind, addresses these issues and, additionally, facilitates dynamic tariff propositions which enhance the credibility of the operator. This research focuses on call quality parameters from the end-user's perspective. The call parameters used in the research are signal strength, successful call rate, normal drop call rate, and hand-over drop rate.
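The network-side KPIs named above (ASR, NER) are simple ratios over call-attempt records. The sketch below illustrates how they might be computed; the function names, arguments and example figures are assumptions for illustration, not taken from QMeter® or the thesis.

```python
# Illustrative sketch of the KPI ratios the abstract lists; the
# names and sample figures are assumptions, not the thesis's code.

def answer_seizure_ratio(attempts, answered):
    """ASR: answered calls as a fraction of call attempts (seizures)."""
    return answered / attempts if attempts else 0.0

def network_efficiency_ratio(attempts, answered, user_failures):
    """NER: calls delivered to the called party (answered, busy, or
    otherwise terminated by the user) over total attempts. Because it
    excludes only network-caused failures, NER >= ASR."""
    return (answered + user_failures) / attempts if attempts else 0.0

asr = answer_seizure_ratio(200, 120)            # 120/200 = 0.60
ner = network_efficiency_ratio(200, 120, 50)    # 170/200 = 0.85
```

The gap between NER and ASR separates network faults from user behaviour (e.g. busy or unanswered calls), which is why operators track both.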
Signal strength is sampled every five milliseconds of an active call, and the average signal strength is calculated for each successful call. The successful call rate, normal drop rate and hand-over drop rate are combined into a measurement of overall call quality, and call quality is evaluated over bundles of 10 calls. These parameters are visualized for a better understanding of where quality is bad, good or excellent, which helps operators, as well as user groups, to measure quality and coverage. Operators boast about their bandwidth, but to know the locations where speed has to be improved they need a tool that can effectively measure speed from the end-user's perspective. BM (bandwidth meter), a tool developed as part of this research, measures the average speed of data sessions and stores the information for analysis at different locations. To address issues of quality in the subscriber segment, this research proposes varying tariffs based on call and bandwidth quality. Call charging based on call quality as perceived by the end-user is proposed, both to satisfy subscribers and to help operators improve customer satisfaction and increase average revenue per user. Tariff redemption procedures are put forward for bundles of 10 calls and 10 data sessions. In addition to the varying of tariffs, quality escalation processes are proposed. Deploying such tools on selected or random samples of users will result in a substantial improvement in user loyalty which, in turn, will bring operational and economic advantages.
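The bundle-of-10 quality measure and the quality-linked tariff described above can be sketched as follows. The weighting of drop types, the quality bands and the redemption percentages are all illustrative assumptions; the thesis proposes the mechanism but the sketch does not reproduce its actual formulas.

```python
# Hedged sketch of per-call averaging, bundle-of-10 quality and a
# quality-linked tariff; weights and bands are assumed, not QMeter®'s.

def average_signal_strength(samples_dbm):
    """Mean of signal-strength samples taken every 5 ms of an active call."""
    return sum(samples_dbm) / len(samples_dbm)

def bundle_quality(outcomes):
    """Quality score for a bundle of 10 calls, each outcome one of
    'ok', 'drop' (normal drop) or 'ho_drop' (hand-over drop)."""
    assert len(outcomes) == 10
    successful = outcomes.count("ok") / 10
    normal_drop = outcomes.count("drop") / 10
    ho_drop = outcomes.count("ho_drop") / 10
    # Penalise hand-over drops more heavily (assumed weighting).
    return successful - 0.5 * normal_drop - 1.0 * ho_drop

def tariff_multiplier(quality):
    """Assumed redemption rule: charge less for a bundle whose
    measured quality falls below target bands."""
    if quality >= 0.9:
        return 1.0    # full tariff
    if quality >= 0.7:
        return 0.9    # 10% redemption
    return 0.75       # 25% redemption

calls = ["ok"] * 8 + ["drop", "ho_drop"]
q = bundle_quality(calls)       # 0.8 - 0.05 - 0.10 = 0.65
rate = tariff_multiplier(q)     # 0.75, i.e. a 25% redemption
```

A data-session analogue would apply the same banding to BM's average-speed measurements per bundle of 10 sessions.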
Showing items related by title, author, creator and subject.
Information quality assessment in e-learning systems. Neagu, Daniel; Cullen, Andrea J.; Alkhattabi, Mona A. (University of Bradford, Informatics Research Institute, School of Computing, Informatics and Media, 2011-04-15). E-learning systems provide a promising solution as an information exchange channel. Improved technology can mean faster and easier access to information, but does not necessarily ensure the quality of that information. It is therefore essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and e-learning in particular, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is a new information quality framework, with 14 information quality attributes grouped into three quality dimensions: intrinsic, contextual representation and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach, reporting validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. Measuring information quality in an e-learning context is difficult, however, because the concept of information quality is complex and the measurements are expected to be multidimensional in nature. Reliable measures need to be obtained in a systematic way, whilst considering the purpose of the measurement. We therefore start by adopting a Goal Question Metric (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework.
We then define an assessment model and measurement scheme based on a multi-element analysis technique. The results obtained are promising, and show that the framework and assessment scheme can give good predictions of information quality in an e-learning context. This research makes novel contributions by proposing a solution to the problems raised by the absence of consensus on evaluation standards and methods for measuring information quality in an e-learning context. It also anticipates the feasibility of using web mining techniques to automate the retrieval of the information required for quality measurement. The assessment model is useful to e-learning system designers, providers and users, as it gives a comprehensive indication of the quality of information in such systems and also facilitates evaluation, comparison and analysis of information quality.
Dementia Care Mapping (DCM): A Review of the research literature. Brooker, Dawn J.R. (2005). The published literature on dementia care mapping (DCM) in improving quality of life and quality of care through practice development and research dates back to 1993. The purpose of this review of the research literature is to answer some key questions about the nature of the tool and its efficacy, to inform the ongoing revision of the tool, and to set an agenda for future research. Design and Methods: The DCM bibliographic database at the University of Bradford in the United Kingdom contains all known publications on DCM (http://www.bradford.ac.uk/acad/health/dcm); this formed the basis of the review. Texts that specifically examined the efficacy of DCM, or in which DCM was used as a main measure in the evaluation or research, were reviewed. Results: Thirty-four papers were categorized into five main types: (a) cross-sectional surveys, (b) evaluations of interventions, (c) practice development evaluations, (d) multimethod evaluations, and (e) papers investigating the psychometric properties of DCM.
Index revisions, market quality and the cost of equity capital. Mazouz, Khelifa; Freeman, Mark C.; Aldaya, Wael H. (University of Bradford, School of Management, 2013-11-20). This thesis examines the impact of FTSE 100 index revisions on various aspects of stock market quality and on the cost of equity capital. Our study spans the period 1986–2009. Our analyses indicate that index membership enhances all aspects of liquidity, including trading continuity, trading cost and price impact. We also show that the liquidity premium and the cost of equity capital decrease significantly after additions, but do not exhibit any significant change following deletions. The finding that investment opportunities increase after additions, but do not decline following deletions, suggests that the benefits of joining an index are likely to be permanent. This evidence is consistent with the investor awareness view of Chen et al. (2004, 2006), which suggests that investors' awareness of a stock improves when it becomes a member of an index, but does not diminish after it is removed from the index. Finally, we report significant changes in the comovement of stock returns with the FTSE 100 index around the revision events. These changes are driven mainly by noise-related factors and partly by fundamental-related factors.