TY - JOUR
T1 - Multisource assessment for development purposes: Revisiting the methodology of data analysis
AU - Batista Foguet, Juan Manuel
AU - Boyatzis, Richard
AU - Saris, Willem Egbert
AU - Serlavós Serra, Ricard
AU - Velasco Moreno, Ferran
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Multisource assessment (MSA) is based on the belief that assessments are valid inferences about an individual's behavior. When used for performance management purposes, convergence of views among raters is important, and therefore testing factor invariance across raters is critical. However, when MSA is used for development purposes, raters usually come from a greater number of contexts, a fact that requires a different data analysis approach. We revisit the MSA data analysis methodology when MSA is used for development, with the aim of improving its effectiveness. First, we argue that having raters from different contexts is an integral element of the assessment, with the trait-context dyad being the actual latent variable. This leads to the specification of an Aggregate (instead of the usual Latent) multidimensional factor model. Second, since data analysis usually aggregates scores for each rater group into a single mean that is then compared with the self-rating score, we propose that the test for factor invariance must also include scalar invariance, a prerequisite for mean comparison. To illustrate this methodology, we conducted a 360° survey on a sample of over 1100 MBA students enrolled in a leadership development course. Finally, through this study we show how the survey can be customized to each rater group to make the MSA process more effective.
U2 - 10.3389/fpsyg.2018.02646
DO - 10.3389/fpsyg.2018.02646
M3 - Article
SN - 1664-1078
VL - 9
SP - 2646
JO - Frontiers in Psychology
JF - Frontiers in Psychology
ER -