Multisource assessment for development purposes: Revisiting the methodology of data analysis

Juan Manuel Batista Foguet, Richard Boyatzis, Willem Egbert Saris, Ricard Serlavós Serra, Ferran Velasco Moreno

Research output: Indexed journal article, peer-reviewed

4 Citations (Scopus)

Abstract

Multisource assessment (MSA) is based on the belief that assessments are valid inferences about an individual's behavior. When used for performance management purposes, convergence of views among raters is important, and therefore testing factor invariance across raters is critical. However, when MSA is used for development purposes, raters usually come from a greater number of contexts, a fact that requires a different data analysis approach. We revisit the MSA data analysis methodology when MSA is used for development, with the aim of improving its effectiveness. First, we argue that having raters from different contexts is an integral element of the assessment, with the trait-context dyad being the actual latent variable. This leads to the specification of an Aggregate (instead of the usual Latent) multidimensional factor model. Second, since data analysis usually aggregates the scores for each rater group into a single mean that is then compared with the self-rating score, we propose that the test for factor invariance must also include scalar invariance, a prerequisite for mean comparison. To illustrate this methodology, we conducted a 360° survey on a sample of over 1,100 MBA students enrolled in a leadership development course. Finally, by means of this study we show how the survey can be customized to each rater group to make the MSA process more effective.
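The aggregation-and-comparison step the abstract describes can be sketched as follows. The rater groups, item values, and self-rating below are purely illustrative (not data from the study), and, as the paper argues, this kind of group-mean versus self-rating comparison is only meaningful if scalar invariance across rater groups has been established first:

```python
import statistics

# Hypothetical ratings of one competency for a single focal person,
# grouped by rater context (groups and values are illustrative only).
ratings = {
    "peers":        [3.8, 4.2, 4.0],
    "subordinates": [4.5, 4.1],
    "friends":      [4.8, 4.6, 4.9],
}
self_rating = 4.4

# Collapse each rater group into a single mean, then compare
# that mean with the self-rating score.
group_means = {group: statistics.mean(vals) for group, vals in ratings.items()}
gaps = {group: round(m - self_rating, 2) for group, m in group_means.items()}

for group, gap in gaps.items():
    print(f"{group}: mean={group_means[group]:.2f}, gap vs self={gap:+.2f}")
```

Without scalar invariance, a nonzero gap here could reflect rater groups using the response scale differently rather than a genuine self-other discrepancy, which is why the paper makes scalar invariance a prerequisite for this comparison.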
Original language: English
Pages (from-to): 871-16
Journal: Frontiers in Psychology
Volume: 9
DOIs
Publication status: Published - 1 Jan 2019
