• The curse of dimensionality of decision-making units: A simple approach to increase the discriminatory power of data envelopment analysis

      Vincent, Charles; Aparicio, J.; Zhu, J. (2019-12)
      Data envelopment analysis (DEA) is a technique for identifying the best practices of a given set of decision-making units (DMUs) whose performance is characterized by multiple performance metrics classified as inputs and outputs. Although DEA is regarded as non-parametric, the sample size can, empirically, be of great importance in determining the efficiency scores of the evaluated units: using too many inputs and outputs may result in a significant number of DMUs being rated as efficient. In the DEA literature, empirical rules relating the number of variables to the number of observations have been established to avoid too many DMUs being rated as efficient. When the number of DMUs falls below these empirical thresholds, the discriminatory power among the DMUs may weaken, making the data set unsuitable for traditional DEA models. This lack of discrimination is often referred to in the literature as the “curse of dimensionality”. To overcome this drawback, we provide a simple approach to increasing the discriminatory power between efficient and inefficient DMUs using the well-known pure DEA model, which considers either inputs only or outputs only. Three real cases, namely printed circuit boards, Greek banks, and quality of life in Fortune’s best cities, are discussed to illustrate the proposed approach.
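      As general background to the DEA models this abstract refers to (the paper's pure DEA variant is not reproduced here), a minimal input-oriented CCR envelopment program can be sketched with `scipy.optimize.linprog`; the function name and toy data below are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.concatenate(([1.0], np.zeros(n)))        # minimise theta
    # input constraints:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack((-X[:, [o]], X))
    # output constraints: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack((np.zeros((s, 1)), -Y))
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(m), -Y[:, o]))
    bounds = [(None, None)] + [(0.0, None)] * n     # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# toy example: two DMUs, one input, one output
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
print(ccr_efficiency(X, Y, 0))  # DMU 0 is efficient (score 1.0)
print(ccr_efficiency(X, Y, 1))  # DMU 1 uses twice the input (score 0.5)
```

      With many inputs and outputs relative to the number of DMUs, almost every column becomes its own best peer and the score tends to 1.0, which is exactly the discrimination problem the abstract describes.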
    • An evolutionary Pentagon Support Vector finder method

      Mousavi, S.M.H.; Vincent, Charles; Gherman, T. (2020-07-15)
      In dealing with big data, we need effective algorithms; an effectiveness that depends, among other things, on the ability to remove outliers from the data set, especially when dealing with classification problems. To this end, support vector finder algorithms have been created to retain only the most important data in the data pool. Nevertheless, existing classification algorithms, such as Fuzzy C-Means (FCM), suffer from the drawback of setting the initial cluster centers imprecisely. In this paper, we avoid these shortcomings and aim to find and remove unnecessary data in order to speed up the final classification task without losing vital samples and without harming final accuracy; in this sense, we present a unique approach for finding support vectors, named the evolutionary Pentagon Support Vector (PSV) finder method. The originality of the current research lies in using geometrical computations and evolutionary algorithms to build a more effective system, which has the advantage of higher accuracy on some data sets. The proposed method is subsequently tested on seven benchmark data sets and the results are compared to those obtained from performing classification on the original data (classification before and after PSV) under the same conditions. The testing returned promising results.
    • Indices for the Betterment of the Public

      Vincent, Charles; Emrouznejad, A.; Johnson, M.P. (2020-01)
      Over the years, the quest for a better society has led to the birth of a variety of composite indices of development, from the gross domestic product to the happiness index. These indices usually integrate various social, cultural, psychological, and political aspects and are considered of vital importance for evaluating a country’s level of development and for assessing the impact of policy, especially in the public sector. Overall, they consist of numerical measures that describe the well-being of both the individual and the society as a whole. This Special Issue of Socio-Economic Planning Sciences on Indices for the Betterment of the Public includes thirteen research articles by authors from Belgium, Colombia, Greece, India, Iran, Italy, Peru, Portugal, Spain, and the United Kingdom.
    • Measuring the efficiency of two stage network processes: a satisficing DEA approach

      Mehdizadeh, S.; Amirteimoori, A.; Vincent, Charles; Behzadi, M.H.; Kordrostami, S. (2021)
      Regular network data envelopment analysis (NDEA) models deal with evaluating the performance of a set of decision-making units (DMUs) with a two-stage structure in the context of a deterministic data set. In the real world, however, observations may display stochastic behavior. To the best of our knowledge, despite the existing research on different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and the leader-follower concepts. Using the properties of the probability distribution and under the assumption of a single random factor in the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages, as the leader and the follower, respectively, at different confidence levels and under different aspiration levels, is discussed. The proposed model is further applied to a real case concerning 16 commercial banks in China in order to confirm the applicability of the proposed approach at different confidence levels and under different aspiration levels.
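      The abstract does not reproduce the model itself; as background, the generic satisficing (chance-constrained) efficiency program that this family of models builds on can be sketched as follows, where the symbols are illustrative rather than the paper's own notation:

```latex
\max_{u,v}\; \beta
\quad \text{s.t.} \quad
\Pr\!\left( \frac{\sum_{r} u_r \tilde{y}_{ro}}{\sum_{i} v_i \tilde{x}_{io}} \ge \beta \right) \ge \alpha,
\qquad u, v \ge 0
```

      Here $\alpha$ is the decision maker's confidence level and $\beta$ the aspiration level; under a single random factor with a known distribution, the probabilistic constraint can be replaced by its deterministic quantile equivalent, which is what makes the linear-programming reformulation in the abstract possible.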
    • Satisficing data envelopment analysis: a Bayesian approach for peer mining in the banking sector

      Vincent, Charles; Tsolas, I.E.; Gherman, T. (2018-10)
      Over the past few decades, the banking sectors in Latin America have undergone rapid structural changes to improve the efficiency and resilience of their financial systems. The recent literature shows that all the research studies conducted to analyze the above-mentioned efficiency are based on a deterministic data envelopment analysis (DEA) model or an econometric frontier approach. Nevertheless, the deterministic DEA model suffers from a possible lack of statistical power, especially in a small sample. As such, the current research paper develops the technique of satisficing DEA to examine the still under-explored case of Peru. We propose a satisficing DEA model applied to 14 banks operating in Peru to evaluate bank-level efficiency under a stochastic environment, free from any theoretical distributional assumption. The proposed model not only reports bank efficiency, but also proposes a new framework for peer mining based on Bayesian analysis and potential improvements with the bias-corrected and accelerated confidence interval. Our study is the first of its kind in the literature to perform a peer analysis based on a probabilistic approach.
    • Supporting better practice benchmarking: A DEA-ANN approach to bank branch performance assessment

      Tsolas, I.E.; Vincent, Charles; Gherman, T. (2020-12)
      The quest for best practices may lead to an increased risk of poor decision-making, especially when aiming to attain best practice levels reveals that efforts are beyond the organization’s present capabilities. This situation is commonly known as the “best practice trap”. Motivated by this observation, the purpose of the present paper is to develop a practical methodology to support better practice benchmarking, with an application to the banking sector. In this sense, we develop a two-stage hybrid model that employs an Artificial Neural Network (ANN) integrated with Data Envelopment Analysis (DEA), which is used as a preprocessor, to investigate the ability of the DEA-ANN approach to classify the sampled branches of a Greek bank into predefined efficiency classes. The ANN is integrated with a family of radial and non-radial DEA models. This combined approach effectively captures the information contained in the characteristics of the sampled branches, and subsequently demonstrates a satisfactory classification ability, especially for the efficient branches. Our prediction results are presented using four performance measures (hit rates): the percent success rate of classifying a bank branch’s performance exactly, within one class of its actual performance, just one class above the actual class, and just one class below the actual class. The proposed modeling approach integrates the DEA context with ANN and advances benchmarking practices to enhance the decision-making process for efficiency improvement.
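      The four hit rates named in the abstract are simple ordinal-classification summaries; a minimal sketch of how such rates could be computed (the function name, class coding, and sign convention for "above"/"below" are assumptions, not the paper's definitions):

```python
import numpy as np

def hit_rates(actual, predicted):
    """Hit rates for ordinal efficiency classes (here 0 = best class).

    Returns the share of units classified exactly, within one class,
    exactly one class above (better than) the actual class, and
    exactly one class below (worse than) the actual class.
    """
    actual = np.asarray(actual)
    predicted = np.asarray(predicted)
    diff = predicted - actual
    return {
        "exact": float(np.mean(diff == 0)),
        "within_one": float(np.mean(np.abs(diff) <= 1)),
        "one_above": float(np.mean(diff == -1)),  # predicted one class better
        "one_below": float(np.mean(diff == 1)),   # predicted one class worse
    }

# toy example: 4 branches, 3 efficiency classes
rates = hit_rates(actual=[0, 1, 2, 1], predicted=[0, 2, 2, 0])
print(rates)  # exact 0.5, within_one 1.0, one_above 0.25, one_below 0.25
```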
    • A translation invariant pure DEA model

      Vincent, Charles; Färe, R.; Grosskopf, S. (2016-02-16)
      This communication complements the DEA model proposed by Lovell and Pastor (1999) by incorporating both positive and negative criteria into the model. As such, we propose a DEA model, known as pure DEA, using a directional distance function approach.
    • Value of the stochastic efficiency in data envelopment analysis

      Vincent, Charles (2017-09-15)
      This article examines the potential benefits of solving a stochastic DEA model over solving a deterministic DEA model. It demonstrates that wrong decisions may be made whenever an inherently stochastic DEA problem is solved with the stochastic information either unobserved or reduced to a measure of central tendency. We propose two linear models: a semi-stochastic model, in which the inputs of the DMU of interest are treated as random while the inputs of the other DMUs are frozen at their expected values, and a stochastic model, in which the inputs of all of the DMUs are treated as random. These two models can be used with any empirical distribution in a Monte Carlo sampling approach. We also define the value of the stochastic efficiency (or semi-stochastic efficiency) and the expected value of the efficiency.
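      The semi-stochastic idea described above lends itself to a short Monte Carlo sketch. For illustration only, the DEA score is reduced to the single-input, single-output case (where CCR efficiency collapses to a ratio test); the function names, data, and resampling scheme are assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def ccr_efficiency_1in_1out(x, y, o):
    """With one input and one output, CCR efficiency is the DMU's
    output/input ratio relative to the best ratio in the sample."""
    ratios = y / x
    return ratios[o] / ratios.max()

def semi_stochastic_efficiency(x_mean, x_samples_o, y, o, draws=1000):
    """Semi-stochastic model: only DMU o's input is random, resampled from
    its empirical draws; all other inputs stay at their expected values.
    Returns the expected value of the efficiency over the draws."""
    effs = np.empty(draws)
    for k in range(draws):
        x = x_mean.copy()
        x[o] = rng.choice(x_samples_o)  # empirical Monte Carlo draw
        effs[k] = ccr_efficiency_1in_1out(x, y, o)
    return effs.mean()

# toy example: two DMUs; DMU 1's input is resampled, DMU 0's is frozen
x_mean = np.array([2.0, 4.0])
y = np.array([2.0, 2.0])
print(semi_stochastic_efficiency(x_mean, np.array([4.0]), y, o=1))
```

      Comparing this expected efficiency against the deterministic score computed at the mean inputs is one way to see the gap the abstract warns about when stochastic information is collapsed to a central tendency.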