• A cognitive analytics management framework for the transformation of electronic government services from users’ perspective to create sustainable shared values

      Osman, I.H.; Anouze, A.L.; Irani, Zahir; Lee, H.; Medeni, T.D.; Weerakkody, Vishanth J.P. (2019-10-16)
      Electronic government services (e-services) involve the delivery of information and services to stakeholders via the Internet, the Internet of Things and other traditional modes. Despite their beneficial values, the overall level of usage (take-up) remains relatively low compared to traditional modes. They are also challenging to evaluate due to behavioral, economic, political, and technical aspects. The literature lacks a methodological framework to guide government transformation applications in improving both the internal processes of e-services and the institutional transformation needed to advance relationships with stakeholders. This paper proposes a cognitive analytics management (CAM) framework to implement such transformations. The ambition is to increase users’ take-up rate and satisfaction, and to create sustainable shared values through the provision of improved e-services. The CAM framework uses cognition to understand and frame the transformation challenge in analytics terms. Analytics insights for improvement are generated using data envelopment analysis (DEA). A classification and regression tree is then applied to the DEA results to identify the characteristics of satisfaction that advance relationships. The importance of senior management is highlighted for setting strategic goals and providing various forms of executive support. The CAM application to transforming Turkish e-services is validated on large-sample data collected through an online survey. The results are discussed; the outcomes and impacts are reported in terms of estimated savings of more than fifteen billion dollars over a ten-year period and increased usage of the improved e-services. We conclude with directions for future research.
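      The classification-tree step of such a pipeline can be illustrated with a toy sketch: a single CART-style split (a decision stump) chosen by weighted Gini impurity. The attributes `age` and `weekly_use` and the binary "satisfied" label are illustrative stand-ins, not the survey's actual variables or the DEA-derived classes.

```python
import numpy as np

# Hypothetical user-level data: two illustrative attributes and a binary
# "satisfied" label standing in for DEA-derived satisfaction classes.
rng = np.random.default_rng(0)
age = rng.integers(18, 70, size=200)          # assumed attribute
weekly_use = rng.integers(0, 15, size=200)    # assumed attribute
# Illustrative rule: heavier users tend to be satisfied (plus noise)
satisfied = (weekly_use + rng.normal(0.0, 2.0, size=200) > 6).astype(int)

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def best_split(x, y):
    """Best single CART-style split (decision stump) on one attribute,
    chosen by minimising the weighted Gini impurity of the two leaves."""
    best_t, best_score = None, np.inf
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue  # skip degenerate splits
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

t_use, g_use = best_split(weekly_use, satisfied)
t_age, g_age = best_split(age, satisfied)   # no real signal in age
```

      With this construction the `weekly_use` split yields a much lower impurity than the `age` split, mirroring how CART surfaces the attributes that characterize satisfied users.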
    • A comparative analysis of two-stage distress prediction models

      Mousavi, Mohammad M.; Quenniche, J.; Tone, K. (2019-04-01)
      On feature selection, as one of the critical steps in developing a distress prediction model (DPM), a variety of expert systems and machine learning approaches have analytically supported developers. Data envelopment analysis (DEA) has provided this support by estimating the novel feature of managerial efficiency, which has frequently been used in recent two-stage DPMs. As key contributions, this study extends the application of expert systems in credit scoring and distress prediction by applying diverse DEA models to compute corporate market efficiency in addition to the prevailing managerial efficiency, and by estimating the decomposed measure of mix efficiency and investigating its contribution, compared to pure technical efficiency and scale efficiency, to the performance of DPMs. Further, this paper provides a comprehensive comparison between two-stage DPMs, estimating a variety of DEA efficiency measures in the first stage and employing static and dynamic classifiers in the second stage. Based on experimental results, guidelines are provided to help practitioners develop two-stage DPMs; more specifically, guidelines are provided to assist with the choice of the proper DEA models to use in the first stage, and the choice of the best corporate efficiency measures and classifiers to use in the second stage.
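      A minimal sketch of the first stage, assuming the standard input-oriented CCR model (one common choice among the diverse DEA models such studies compare) solved as a linear program with `scipy.optimize.linprog`; the toy firms, inputs, and financial ratio are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU (firm) `o`.
    X: (m, n) inputs, Y: (s, n) outputs; columns are firms."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_in = np.hstack([-X[:, [o]], X])           # sum lam*x <= theta*x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum lam*y >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: 2 inputs (e.g. assets, staff), 1 output (e.g. revenue)
X = np.array([[2.0, 4.0, 4.0],
              [3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0]])
eff = np.array([ccr_efficiency(X, Y, o) for o in range(X.shape[1])])

# Stage 2: append the efficiency score to conventional financial ratios
# as an extra feature for any static or dynamic classifier.
ratios = np.array([[0.4, 0.7, 0.2]])           # hypothetical ratio
features = np.vstack([ratios, eff]).T          # (n_firms, n_features)
```

      On this toy frontier the first two firms are efficient (score 1) and the third scores 5/6; the resulting `features` matrix is what a second-stage classifier would consume.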
    • The curse of dimensionality of decision-making units: A simple approach to increase the discriminatory power of data envelopment analysis

      Vincent, Charles; Aparicio, J.; Zhu, J. (2019-12)
      Data envelopment analysis (DEA) is a technique for identifying the best practices of a given set of decision-making units (DMUs) whose performance is characterized by multiple performance metrics classified as inputs and outputs. Although DEA is regarded as non-parametric, sample size can be of great importance in determining efficiency scores: empirically, using too many inputs and outputs relative to the number of DMUs may result in a significant number of DMUs being rated as efficient. In the DEA literature, empirical rules relating the number of variables to the number of observations have been established to avoid too many DMUs being rated as efficient. When the number of DMUs is below these empirical thresholds, the discriminatory power among the DMUs may weaken, making the data set unsuitable for traditional DEA models. In the literature, this lack of discrimination is often referred to as the “curse of dimensionality”. To overcome this drawback, we provide a simple approach to increase the discriminatory power between efficient and inefficient DMUs using the well-known pure DEA model, which considers either inputs only or outputs only. Three real cases, namely printed circuit boards, Greek banks, and quality of life in Fortune’s best cities, are discussed to illustrate the proposed approach.
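      A hedged sketch of one half of the pure model: the outputs-only radial formulation under variable returns to scale, in the spirit of Lovell and Pastor (1999). The four DMUs and their outputs are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def pure_output_dea(Y, o):
    """Outputs-only ('pure') radial DEA under VRS: expand DMU o's
    outputs as far as the convex hull of observed outputs allows.
    Returns phi >= 1; phi == 1 means the DMU is efficient."""
    s, n = Y.shape
    c = np.r_[-1.0, np.zeros(n)]                  # maximise phi
    A_ub = np.hstack([Y[:, [o]], -Y])             # phi*y_o <= sum lam*y
    b_ub = np.zeros(s)
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # VRS: sum lam = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return -res.fun

# Toy data: 2 outputs, 4 DMUs (columns)
Y = np.array([[4.0, 2.0, 1.0, 2.0],
              [1.0, 3.0, 4.0, 2.0]])
phi = [pure_output_dea(Y, o) for o in range(4)]
```

      Here the first three DMUs sit on the output frontier (phi = 1), while the fourth could expand both outputs by 25% (phi = 1.25), so the model discriminates even with no inputs in play.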
    • Performance evaluation of bankruptcy prediction models: An orientation-free super-efficiency DEA-based framework

      Mousavi, Mohammad M.; Quenniche, J.; Xu, B. (2015-12)
      Prediction of corporate failure is one of the major activities in auditing firms’ risks and uncertainties. The design of reliable models to predict bankruptcy is crucial for many decision-making processes. Although a large number of models have been designed to predict bankruptcy, the relative performance evaluation of competing prediction models remains unidimensional in nature, which often leads to conflicting results being reported. In this research, we overcome this methodological issue by proposing an orientation-free super-efficiency data envelopment analysis model as a multi-criteria assessment framework. Furthermore, we perform an exhaustive comparative analysis of the most popular bankruptcy modeling frameworks on UK data, including our own models. In addition, we address two important research questions: do some modeling frameworks perform better than others by design, and to what extent do the choice and design of explanatory variables and their nature affect the performance of modeling frameworks? We report on our findings.
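      The super-efficiency idea can be illustrated with the input-oriented Andersen-Petersen variant (a simpler stand-in for the paper's orientation-free model): the unit under evaluation is removed from its own reference set, so efficient units score above 1 and can be fully ranked. Data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Andersen-Petersen input-oriented super-efficiency: DMU `o` is
    excluded from its own reference set, so efficient units score > 1
    and a complete ranking becomes possible."""
    keep = [j for j in range(X.shape[1]) if j != o]
    Xr, Yr = X[:, keep], Y[:, keep]
    m, n = Xr.shape
    s = Yr.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta
    A_ub = np.vstack([np.hstack([-X[:, [o]], Xr]),    # inputs
                      np.hstack([np.zeros((s, 1)), -Yr])])  # outputs
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Toy data: 2 inputs, 1 output, 3 DMUs (columns)
X = np.array([[2.0, 4.0, 4.0],
              [3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0]])
rank_scores = [super_efficiency(X, Y, o) for o in range(3)]
```

      The two efficient units now score 2.0 while the inefficient one keeps its ordinary score of 5/6, which is what allows competing prediction models to be ranked on multiple criteria at once.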
    • Satisficing data envelopment analysis: a Bayesian approach for peer mining in the banking sector

      Vincent, Charles; Tsolas, I.E.; Gherman, T. (2018-10)
      Over the past few decades, the banking sectors in Latin America have undergone rapid structural changes to improve the efficiency and resilience of their financial systems. The literature to date shows that the research studies analyzing this efficiency are based on deterministic data envelopment analysis (DEA) models or econometric frontier approaches. Nevertheless, the deterministic DEA model suffers from a possible lack of statistical power, especially in a small sample. As such, the current research paper develops the technique of satisficing DEA to examine the still under-explored case of Peru. We propose a satisficing DEA model applied to 14 banks operating in Peru to evaluate bank-level efficiency in a stochastic environment, free from any theoretical distributional assumption. The proposed model not only reports bank efficiency, but also proposes a new framework for peer mining based on Bayesian analysis, and potential improvements with the bias-corrected and accelerated confidence interval. Our study is the first of its kind in the literature to perform a peer analysis based on a probabilistic approach.
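      The bias-corrected and accelerated (BCa) interval mentioned above can be computed with `scipy.stats.bootstrap`. The 14 efficiency scores below are simulated placeholders, not the Peruvian banks' actual scores.

```python
import numpy as np
from scipy.stats import bootstrap

# Hypothetical DEA efficiency scores for 14 banks (illustrative values)
rng = np.random.default_rng(42)
eff = np.clip(rng.normal(0.85, 0.08, size=14), 0.0, 1.0)

# BCa (bias-corrected and accelerated) 95% interval for mean efficiency
res = bootstrap((eff,), np.mean, confidence_level=0.95,
                method='BCa', random_state=rng)
low, high = res.confidence_interval
```

      The BCa method adjusts the plain percentile interval for bias and skewness in the bootstrap distribution, which matters with a sample as small as 14 banks.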
    • Supporting better practice benchmarking: A DEA-ANN approach to bank branch performance assessment

      Tsolas, I.E.; Vincent, Charles; Gherman, T. (2020-12)
      The quest for best practices may lead to an increased risk of poor decision-making, especially when attaining best-practice levels proves to be beyond the organization’s present capabilities. This situation is commonly known as the “best practice trap”. Motivated by this observation, the purpose of the present paper is to develop a practical methodology to support better practice benchmarking, with an application to the banking sector. In this sense, we develop a two-stage hybrid model that integrates an Artificial Neural Network (ANN) with Data Envelopment Analysis (DEA), which is used as a preprocessor, to investigate the ability of the DEA-ANN approach to classify the sampled branches of a Greek bank into predefined efficiency classes. The ANN is integrated with a family of radial and non-radial DEA models. This combined approach effectively captures the information contained in the characteristics of the sampled branches, and subsequently demonstrates a satisfactory classification ability, especially for the efficient branches. Our prediction results are presented using four performance measures (hit rates): the percent success rate of classifying a bank branch’s performance exactly, within one class of its actual performance, just one class above the actual class, and just one class below the actual class. The proposed modeling approach integrates the DEA context with ANN and advances benchmarking practices to enhance the decision-making process for efficiency improvement.
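      The four hit rates can be computed directly from predicted and actual class labels; the ten branch classes below are hypothetical, not the Greek bank's data.

```python
import numpy as np

def hit_rates(actual, predicted):
    """Four hit rates for ordinal class predictions: exact match,
    within one class either way, exactly one class above, and exactly
    one class below the actual class."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    d = predicted - actual
    return {"exact": float(np.mean(d == 0)),
            "within_one": float(np.mean(np.abs(d) <= 1)),
            "one_above": float(np.mean(d == 1)),
            "one_below": float(np.mean(d == -1))}

# Hypothetical efficiency classes (1 = best) for ten branches
actual    = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4]
predicted = [1, 2, 2, 2, 3, 3, 3, 4, 4, 2]
rates = hit_rates(actual, predicted)
```

      In this toy example 60% of branches are classified exactly and 90% within one class, the same shape of summary the paper reports.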
    • A translation invariant pure DEA model

      Vincent, Charles; Färe, R.; Grosskopf, S. (2016-02-16)
      This communication complements the DEA model proposed by Lovell and Pastor (1999) by incorporating both positive and negative criteria in the model. To this end, we propose a DEA model, known as pure DEA, using a directional distance function approach.
    • Value of the stochastic efficiency in data envelopment analysis

      Vincent, Charles (2017-09-15)
      This article examines the potential benefits of solving a stochastic DEA model rather than a deterministic DEA model. It demonstrates that wrong decisions could be made whenever an inherently stochastic DEA problem is solved with the stochastic information either unobserved or reduced to a measure of central tendency. We propose two linear models: a semi-stochastic model, in which the inputs of the DMU of interest are treated as random while the inputs of the other DMUs are frozen at their expected values, and a stochastic model, in which the inputs of all DMUs are treated as random. These two models can be used with any empirical distribution in a Monte Carlo sampling approach. We also define the value of the stochastic efficiency (or semi-stochastic efficiency) and the expected value of the efficiency.
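      The semi-stochastic idea can be sketched in a Monte Carlo loop. To stay self-contained, the sketch assumes a single input and a single output, where CRS efficiency has the closed form (y_o / x_o) / max_j (y_j / x_j); the lognormal input law and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

x_mean = np.array([4.0, 5.0, 8.0])   # expected inputs of three DMUs
y = np.array([2.0, 3.0, 3.0])        # outputs (assumed deterministic)
o = 2                                # DMU of interest

draws = 10_000
eff = np.empty(draws)
for k in range(draws):
    x = x_mean.copy()
    # Semi-stochastic: only the input of DMU o is random; the other
    # DMUs are frozen at their expected input values.
    x[o] = rng.lognormal(mean=np.log(x_mean[o]), sigma=0.2)
    prod = y / x                      # productivities
    eff[k] = prod[o] / prod.max()     # CRS efficiency of DMU o

prod_mean = y / x_mean
det_eff = prod_mean[o] / prod_mean.max()   # deterministic score
mc_eff = eff.mean()                        # expected efficiency
```

      With these numbers the deterministic score is 0.625 while the Monte Carlo mean is slightly higher, illustrating the article's point that collapsing a random input to its expected value can misstate the expected efficiency.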