• Applicant perspectives during selection: a review addressing "so what?," "what's new?," and "where to next?"

      McCarthy, J.M.; Bauer, T.N.; Truxillo, D.M.; Anderson, Neil; Costa, Ana-Cristina; Ahmed, S.M. (2017-07-01)
      We provide a comprehensive but critical review of research on applicant reactions to selection procedures published since 2000 (n = 145), when the last major review article on applicant reactions appeared in the Journal of Management. We start by addressing the main criticisms levied against the field to determine whether applicant reactions matter to individuals and employers (“So what?”). This is followed by a consideration of “What’s new?” by conducting a comprehensive and detailed review of applicant reaction research centered upon four areas of growth: expansion of the theoretical lens, incorporation of new technology in the selection arena, internationalization of applicant reactions research, and emerging boundary conditions. Our final section focuses on “Where to next?” and offers an updated and integrated conceptual model of applicant reactions, four key challenges, and eight specific future research questions. Our conclusion is that the field demonstrates stronger research designs, with studies incorporating greater control, broader constructs, and multiple time points. There is also solid evidence that applicant reactions have significant and meaningful effects on attitudes, intentions, and behaviors. At the same time, we identify some remaining gaps in the literature and a number of critical questions that remain to be explored, particularly in light of technological and societal changes.
    • Personnel Selection in the Digital Age: A Review of Validity and Applicant Reactions, and Future Research Challenges

      Woods, S.A.; Ahmed, S.; Nikolaou, I.; Costa, Ana-Cristina; Anderson, Neil (Taylor & Francis Group, 2019)
      We present a targeted review of recent developments and advances in digital selection procedures (DSPs) with particular attention to advances in internet-based techniques. By reviewing the emergence of DSPs in selection research and practice, we highlight five main categories of methods (online applications, online psychometric testing, digital interviews, gamified assessment and social media). We discuss the evidence base for each of these DSP groups, focusing on construct and criterion validity, and applicant reactions to their use in organizations. Based on the findings of our review, we present a critique of the evidence base for DSPs in industrial, work and organizational psychology and set out an agenda for advancing research. We identify pressing gaps in our understanding of DSPs, and ten key questions to be answered. Given that DSPs are likely to depart further from traditional nondigital selection procedures in the future, a theme in this agenda is the need to establish a distinct and specific literature on DSPs, and to do so at a pace that reflects the speed of the underlying technological advancement. In concluding, we, therefore, issue a call to action for selection researchers in work and organizational psychology to commence a new and rigorous multidisciplinary programme of scientific study of DSPs.
    • Team Trust

      Costa, Ana-Cristina; Anderson, Neil (Wiley-Blackwell, 2017-03-23)
      This chapter seeks to clarify the definition of trust and its conceptualization specifically at the team or workgroup level, as well as discussing the similarities and differences between interpersonal and team-level trust. Research on interpersonal trust has shown that individual perceptions of others' trustworthiness and their willingness to engage in trusting behavior when interacting with them are largely history-dependent processes. Thus, trust between two or more interdependent individuals develops as a function of their cumulative interaction. The chapter describes a multilevel framework with individual, team, and organizational level determinants and outcomes of team trust. It aims to clarify core variables and processes underlying team trust and to develop a better understanding of how these phenomena operate in a system involving the individual team members, the team itself, and the organizational contexts in which the team operates. The chapter concludes by reviewing and proposing a number of directions for future research and future-oriented methodological recommendations.
    • Trust in work teams: an integrative review, multilevel model, and future directions

      Costa, Ana-Cristina; Fulmer, C.A.; Anderson, Neil (2018-02)
      This article presents an integrative review of the rapidly growing body of research on trust in work teams. We start by analyzing prominent definitions of trust and their theoretical foundations, followed by different conceptualizations of trust in teams emphasizing its multilevel, dynamic, and emergent nature. We then review the empirical research and its underlying theoretical perspectives concerning the emergence and development of trust in teams. Based on this review, we propose an integrated conceptual framework that organizes the field and can advance knowledge of the multilevel nature of trust in teams. Our conclusion is that trust in teams resides at multiple levels of analysis simultaneously, is subject to factors across levels in organizations, and impacts performance and other relevant outcomes at both the individual and team levels. We argue that research should not only differentiate interpersonal trust between members from collective trust at the team level, but also emphasize the interplay within and between these levels by considering cross-level influences and dynamics. We conclude by proposing four major directions for future research and three critical methodological recommendations for study designs derived from our review and framework.
    • Validity of interpretation: a user validity perspective beyond the test score

      MacIver, R.; Anderson, Neil; Costa, Ana-Cristina; Evers, A. (2014-06)
      This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. This innovative perspective focuses on how user validity can be extended to content-, criterion-, and, to some extent, construct-related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as how to design test outputs and assessments that are pragmatic and optimal.