The three aforementioned factors were shown to form the construct of processing speed ability, as indicated by the parameters showing that the model fits. The model fit parameters also show that the model can explain the roles of all three factors. The RMSEA value of 0.000 indicates that the model fits very well. Similarly, the Chi-square (χ²) value of 0.341, with a significance level (p) of 0.559 (greater than 0.05), indicates that the fit index meets the criteria, meaning that the model describes the measured factor well. Thus, based on a trial aimed at estimating the validity of the internal structure, the research shows that the processing speed ability test that was developed has good internal validity. This is evidenced by the model fit indices, all of which fall within the expected range of values.
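As a quick check, the reported p-value and the RMSEA of 0.000 can be reproduced from the χ² statistic alone. The following minimal sketch (in Python with SciPy) assumes the df of 1 and the sample size of 135 reported in the power analysis below, and uses the common maximum-likelihood formula RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))):

from scipy.stats import chi2

chi_sq, df, n = 0.341, 1, 135          # values reported in the text

p_value = chi2.sf(chi_sq, df)          # exact-fit test p-value, about 0.559
rmsea = (max(chi_sq - df, 0.0) / (df * (n - 1))) ** 0.5   # 0.000, since chi-square < df

print(f"p = {p_value:.3f}, RMSEA = {rmsea:.3f}")

Because the χ² statistic (0.341) is smaller than its degrees of freedom, the RMSEA point estimate is truncated to exactly zero.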
The researchers also wanted to know why the model fit while the parameter estimates were not significant. To answer this question, a post hoc analysis was conducted to determine the magnitude of the statistical power. With a sample size of 135, an RMSEA of 0.000, an alpha of 0.05, and df equal to 1, the statistical power obtained was 5%. This statistical power is very small, and it explains why the model fits while the parameter of each narrow ability is not significant. A simulation further showed that a sample of at least 78,490 people would be required to obtain a statistical power of 80%.
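The paper does not spell out the conventions behind these power figures, but they are consistent with an RMSEA-based power analysis in the style of MacCallum, Browne, and Sugawara, testing exact fit (H0: RMSEA = 0) against an assumed population RMSEA. The sketch below reproduces both numbers under that assumption; the alternative RMSEA of 0.01 used for the 80%-power scenario is an assumption, not a value stated in the text:

from scipy.stats import chi2, ncx2

def rmsea_power(n, df, rmsea_alt, alpha=0.05):
    # Power of the chi-square test of exact fit (H0: RMSEA = 0) when the
    # population RMSEA equals rmsea_alt (assumed convention).
    crit = chi2.ppf(1 - alpha, df)               # critical value under H0
    ncp = (n - 1) * df * rmsea_alt ** 2          # implied noncentrality
    return ncx2.sf(crit, df, ncp) if ncp > 0 else chi2.sf(crit, df)

print(rmsea_power(n=135, df=1, rmsea_alt=0.000))     # 0.05: collapses to alpha
print(rmsea_power(n=78_490, df=1, rmsea_alt=0.01))   # about 0.80

With an alternative RMSEA equal to the observed 0.000 there is, by construction, no misfit to detect, so the post hoc power cannot exceed the alpha level of 5%; only under a small assumed misfit does a figure of roughly 78,500 cases for 80% power emerge.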
Regarding the effect of gender on processing speed ability, the results showed no difference between men and women. This can be seen from the significance level (p) of gender for processing speed ability, which is 0.135 (see Table 1.3). In other words, processing speed ability does not differ when viewed from the aspect of gender.
The results of the validity test show that all three factors form a fitting model for measuring processing speed ability. However, some weaknesses were identified, e.g., the weak statistical power (only about 5%, owing to the insufficient sample size). As mentioned earlier, a sample of at least 78,490 people would be required to obtain stronger statistical power (about 80%). Testing the full theoretical model should include all indicators/sub-tests/narrow abilities, as planned in the larger project of which this study is a part. The planned intelligence test has 12 sub-tests, so the degrees of freedom are 78. Assuming an RMSEA of 0.04 and a target statistical power of 80%, the required sample is 293.
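Under the same assumed framework, the a priori sample size for the planned 12-sub-test model can be found by searching for the smallest N that reaches the target power. The self-contained sketch below takes the df of 78 and the RMSEA of 0.04 from the text above; names such as required_n are illustrative only:

from scipy.stats import chi2, ncx2

def required_n(df, rmsea_alt, target=0.80, alpha=0.05, n_max=100_000):
    # Smallest sample size at which the exact-fit chi-square test reaches
    # the target power, given the assumed population RMSEA.
    crit = chi2.ppf(1 - alpha, df)
    for n in range(10, n_max):
        ncp = (n - 1) * df * rmsea_alt ** 2
        if ncx2.sf(crit, df, ncp) >= target:
            return n
    return None

print(required_n(df=78, rmsea_alt=0.04))   # close to the 293 reported above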
Moreover, test developers interested in the same field are recommended to conduct further research to enrich the validity evidence for this test through different validity sources, such as criterion-related validity with other variables. For practitioners, this test can be used to measure processing speed ability. However, care is needed in interpreting the test results, as the test is new and little evidence is available regarding its validity. Another future challenge is to develop norms from larger groups that could be used to interpret test results more accurately. Norms are required to interpret scores for diagnostic purposes, so their existence becomes essential if the test is to be used in practice.