APPROPRIATENESS AND LIMITATIONS OF FACTOR ANALYSIS METHODS UTILISED IN PSYCHOLOGY

Structural modelling techniques and models that extract latent variables are the predominant recent techniques among the applied multivariate statistical procedures in the social sciences. We believe that correlational studies can provide adequate findings if they are supported by logical analysis or causal modelling procedures. It is important to emphasize that factor analysis methods alone do not reveal the cause of covariability and that the final result of a factor analytical investigation depends, in part, on the decisions and interpretations of the researcher. The questions of the minimum sample size in factor analysis, the ambiguousness of results obtained by FA and the mathematical problems in the use of FA are discussed in particular detail in this paper. Furthermore, the effect of the factor analysis of data obtained from experiments on the scientific paradigm is analyzed, with emphasis on the current problems with its application in social sciences research. No method, including factor analysis, is sufficient to answer all problem issues in the fields of psychology and kinesiology. Therefore, it is necessary to combine complementary methods within research, which will allow a more comprehensive analysis of the researched phenomena and a greater validity of empirical results. Finally, reducing theories in psychology to a psychometric method and theories in kinesiology to a kinesiometric method is an anomaly of numerous quantitative studies within these scientific disciplines, making identification, instead of explanation, of the multi-causal nature of psychological and kinesiological phenomena a primary focus of the research.


INTRODUCTION
Establishing relations between phenomena and the causes of those phenomena, as well as defining the multicausal model of personality, are the fundamental goals of personality psychology, while determining optimum anthropological patterns in a sport is one of the basic goals of sports kinesiology (Trninić, 1995).
Factor analysis has been used in numerous areas. For example, it is used in research studies dealing with taxonomic models. These authors also state that taxonomic reasoning is not only empirically acceptable; it provides adequate results in the areas of its use.
In this context, Malacko and Popović (2001) emphasize that none of the theoretical structural models tested on the basis of the intercorrelations of manifest variables can be accepted unless confirmed by confirmatory factor analysis techniques. Suhr (2003) states that FA helps an investigator to determine the number of latent constructs underlying a set of items (variables) and provides a means of explaining variation among variables (items) using a few newly created variables (factors), thus condensing information to define the content or meaning of the factors, that is, of the latent constructs. The author listed the following assumptions underlying the scientific exactness of EFA: an interval or ratio level of measurement; random sampling; a linear relationship between the observed variables; a normal distribution of each observed variable; a bivariate normal distribution of each pair of observed variables; and multivariate normality.
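As an illustration of the variance-condensing role of FA that Suhr describes, the following minimal sketch (plain NumPy on simulated data; the number of variables, the loading values and the sample size are our own illustrative assumptions, not taken from any cited study) generates six observed variables from a single latent construct and shows that their correlation matrix condenses into essentially one dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 6 observed variables driven by a single latent construct:
# x_ij = loading_j * factor_i + noise_ij  (a one-factor model)
n = 500
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
factor = rng.standard_normal(n)
noise = rng.standard_normal((n, 6)) * np.sqrt(1 - loadings ** 2)
X = factor[:, None] * loadings + noise

# The correlation matrix condenses into essentially one dimension:
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(eigvals.round(2))  # the first eigenvalue dominates; the rest stay well below 1
```

With real data, a dedicated routine (e.g. principal axis factoring with rotation) would replace the bare eigendecomposition, but the underlying logic of condensing shared variance into a few latent dimensions is the same.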
Furthermore, Costello and Osborne (2005) see FA as a complex and multi-step process. Suhr (2003) indicates that CFA allows the researcher to test the hypothesis that a relationship between the observed variables and their underlying latent construct(s) exists. In doing so, the researcher uses knowledge of the theory, empirical research, or both, postulates the relationship pattern a priori and then tests the hypothesis statistically.
Within contemporary psychology and kinesiology, FA methods have proved useful in all settings where numerous correlated variables occur simultaneously in research and where the aim is to determine the fundamental sources of covariance among the data. Observation of such interrelated phenomena is particularly important in various fields of psychology, sociology, pedagogy, political science, economics, anthropology and medicine, but also in chemistry, pharmacology and other sciences.
In the kinesiology of sport, for example, the scientific observation of the motor behaviour of athletes may reveal that many of them achieve similar results in numerous different motor activities. Therefore, in motor activities such as jumps, throws, sprints and direction changes, it would be feasible to presume the existence of certain common factors which determine the intensity and success of an athlete's performance. These common factors which determine the success of motor behaviour represent motor abilities. Motor abilities are hypothetical constructs, meaning they cannot be directly observed and, therefore, cannot be measured directly. They can only be assessed indirectly, which is why they are called latent dimensions (Dizdar, 2006). According to Marsh (2007), all constructs in sport and exercise psychology are hypothetical constructs, and so must be validated using a construct validity approach. Construct validation is relevant to experimental as well as nonexperimental research and is useful in evaluating the researcher's interpretation of the experimental manipulation.
Pervin, Cervone and John (2005) consider factor analysis not quite unbiased. Furthermore, Cervone and Pervin (2008) state that this statistical procedure identifies patterns of covariation in test responses, but it does not answer the question of why the responses covary. It is the researcher, using his or her knowledge of psychology and kinesiology and relying on his or her theoretical beliefs, who infers the existence of some common entity (the factor) and interprets it. Different psychologists may make different interpretations.
Accordingly, Pervin, Cervone and John (2005) say the ultimate result of factor analytical research partially depends upon the decisions and interpretations of the researcher. Ergo, if the main goal of science is to determine laws of interrelations among the phenomena in nature and society, as well as to investigate and conclude about their causes (Mejovšek, 2008), then FA, as a multivariate method, has a certain function in determining the basic latent dimensions of a particular space. Reise, Waller and Comrey (2000) also state that the goal of factor extraction is to identify the number of latent dimensions (factors) needed to accurately account for the common variance among the items. Fulgosi (1984) argues that by FA many useful constructs have been identified and many theories and models have been tested. Also, many wrong interpretations and convictions have been rejected which, if it wasn't for FA, might have been trusted due to their new name, logical consistency, or similar reasons. He also states that FA has been successfully used primarily in psychology: psychology of intelligence, personality, psychomotorics, physiological psychology, psychology of memory, social psychology, industrial psychology, educational psychology, and clinical psychology, as well as in other social and humanistic scientific fields.
In accordance with the above-mentioned, Schutz and Gessaroli (1993) argued that CFA and SEM were the most important statistical tools for the 1990s, but that those techniques had seen almost no application in sport and exercise psychology. Marsh (2007) claimed that CFA/SEM should be the methodology of choice and recommended that sport and exercise psychologists make greater use of these techniques.
Furthermore, Marsh (2007) points out that despite this growing popularity, there appears to be an ever-widening gap between the state-of-the-art methodological and statistical techniques that should be part of the repertoire of serious empirical researchers and the actual skill levels of many applied researchers.
The same author states that, because CFA/SEM has considerable flexibility for addressing complex substantive issues in sport and exercise psychology, there is an increasing need for heuristic, nontechnical overviews of these techniques to assist potential users.
Also, Marsh (2007) suggests that there is a wide variety of multivariate quantitative analyses (CFA, SEM, causal modelling, multilevel analysis, longitudinal data analysis and multitrait-multimethod designs) to address substantive and theoretical issues from a construct validity perspective.
Finally, Marsh (2007) claims that CFA/SEM as statistical techniques provide new and interesting ways of resolving existing problems in sports psychology. The author also points out that SEM technology is more accessible than ever to applied researchers via modern software programs.

FACTOR ANALYSIS AND SCIENTIFIC PARADIGMS
Quite opposite is the opinion of MacCallum and Tait (1986). They analysed 152 scientific articles published in the Journal of Applied Psychology, Organizational Behavior and Human Performance, and Personnel Psychology in the period 1975-1984. They concluded that the selection of FA methods was frequently inappropriate, by which the basic concepts of research methods and scientific work had been jeopardized. Furthermore, comparison of results across intervals within the 1975-1984 period revealed minimal differences in the selections made.
We believe that, in the last decade or so, more recent methodological approaches, such as Structural Equation Modeling (SEM) techniques, have tended to become the prevailing methodological tools used for the analysis and extraction of relations, including latent relations, between variables in psychology and anthropology (Loehlin, 2003; Kline, 2005; Hox & Roberts, 2010).
It is important to point out that SEM is one of the appropriate multivariate methods which enable testing theory and determining causal relations between phenomena.
Furthermore, SEM can be defined as a set of methods used to hypothesize about the arithmetic means, variances and covariances of empirically obtained data in terms of a smaller number of "structural" parameters defined by a hypothetical model (Kaplan, 2000). SEM is based on CFA, regression analysis and other multivariate methods (Marsh, 2007; Mejovšek, 2008; Milas, 2009). Milas (2009) states that SEM is a broad and general statistical approach, used to verify assumptions about relations between observed and latent variables. It can be used to test quantitatively defined theoretical models and to establish their empirical sustainability (Milas, 2009). In addition, SEM can be used to determine the structure of variables which describe a certain construct, phenomenon or process and to define their hypothetical causal relations. Steps in the implementation of SEM include: determination (specification) of the model, assessment and establishing suitability of the model, and modification and interpretation of the model (Milas, 2009).
Kuhn (1970, 1974) noticed, especially in the area of social sciences, disagreement about the nature of legitimate scientific problems and methods, so he even used the term normal science (1970, 1999), inherently meaning science performing under the influence of the current, prevalent paradigm. When science can no longer explain aberrations, which are becoming ever deeper through time, a crisis in science occurs, followed by a resolving scientific revolution and by the establishment of a new paradigm, a new foundation of normal science. Gorsuch (1990, 1997) emphasized a reciprocal influence between research methods in standard use and the currently accepted paradigm. Furthermore, the author noted that, under the influence of different scientific paradigms, the researcher will often choose a different method of factor analysis. Paradigms often influence which techniques are used, in addition to the logic of those techniques, and history affects paradigms. The same author also assumes that different paradigms and purposes will indeed lead to different factor analyses (Gorsuch, 1990, 1997).
In accordance with the aforementioned, Mejovšek (2008) declared that in the natural sciences one of the crucial arguments for the acceptance of new paradigms and the rejection of existing ones was the greater quantitative exactness of the new paradigms and their greater predictive power or precision regarding future events. A researcher's approach to the investigation of certain aspects of particular scientific issues will depend on the paradigm he or she prefers. The author noticed that the greater exactness has been verified and validated by means of FA methods, meaning that FA is a tool for the validation of the existing paradigm. Also, with the development of science and of the scientific research intensively conducted in a particular area, the existing paradigms are evaluated, and those paradigms are retained that can best meet the requirements of scientific demands in revealing the laws by which phenomena in a particular area occur.
Therefore, Mejovšek (2003) states that the more complex a certain area is, the more probable is the existence of a larger number of paradigms. In the opinion of the authors of this paper, this is evidently the situation in psychology and kinesiology.
The methodology used to identify the structure of personality traits, factor analysis, is often challenged for not having a universally recognised basis for choosing among solutions with different numbers of factors. Thus, for example, a five-factor solution depends to some degree on the interpretation of the analyst. This has led to disputes about the "true" number of factors (Pervin, Cervone & John, 2005). In psychology, for example, no unique theory of personality exists. There is no unique theory of motor abilities in kinesiology either. It can be concluded that FA generates the whole semantic continuum, and the researcher interprets and infers. Fornell (1987, in Bucik, 1990) states that multivariate methods "of the second generation" (redundancy analysis, analysis of covariance structures, confirmatory multidimensional scaling) are more rigorous than those "of the first generation" (MANOVA, ANCOVA, multiple regression, ...) regarding the establishment of theoretical starting points, so they are verified by confirmatory techniques (Vodopivec, 1988).
For Pervin, Cervone, and John (2005) personality traits are hierarchically organized. Momirović and associates (1987) explain that the hypothetical model of motor abilities is also of a hierarchical type, that is, it consists of one general factor and a hierarchy of lower-order factors. Such a concept of human characteristics and abilities can be a foundation for a scientific theory. In psychology, for example, the hierarchical standpoint established by Eysenck (1980a, 1980b) suggests that at the simplest level behaviour may be observed through specific responses (the so-called specific response level). Some of these responses become associated and form more general habits (the so-called habitual response level). At a higher level of hierarchical organization, different personality traits are associated and form higher-order factors or superfactors (Eysenck, 1970, 1990).
The second most influential psychologist of the 20th century, Cattell (1965), preferred to work with a large number of factors in research on personality traits. As opposed to him, Eysenck and Eysenck (2005) used FA to combine personality traits into a smaller number of uncorrelated superfactors, which cover a broader range of behaviour.
The major difference between Eysenck and Cattell is that Cattell prefers to work with a larger number of factors at the trait level, which have a narrower definition but tend to correlate with each other. In contrast, Eysenck uses secondary FA to combine traits into a smaller number of superfactors, which cover a broader range of behaviour and tend to be uncorrelated.

SAMPLE SIZE IN FA
In studies treating FA, considerable attention has been paid to the problem of sample size. Typical suggestions comprise information on minimum sample sizes and the minimum ratio between the sample size (N) and the number of manifest variables (p), that is, the minimum ratio N : p.
The problem of sample size was analysed by Kline (1979), who recommended that N should be at least 100, while other authors argued that N should be at least 200, and Cattell (1978) claimed the minimum desirable N to be 250. Comrey and Lee (1992) urged researchers to obtain samples of 500 or more observations whenever possible in factor analytic studies.
Tinsley and Tinsley (1987) also reported on the problem of sample size in FA: a sample of up to 100 entities is poor, up to 200 entities fair, up to 300 entities good, up to 500 entities very good, and up to 1000 entities excellent.
Considering recommendations for the N : p ratio, Cattell (1978) believed it should be in the range of 3 to 6, Gorsuch (1983) argued for a minimum ratio of 5, and Everitt (1975) recommended that the N : p ratio should be at least 10. MacCallum, et al. (1999) commented on the wide range in these recommendations. Furthermore, the same authors noticed that the inconsistency in the recommendations can probably be attributed partly to the relatively small amount of explicit evidence or support provided for any of them. Most of them seem to be general guidelines developed from substantial experience on the part of their supporters. Some authors (e.g., Comrey & Lee, 1992) placed the sample size question into the context of the need to make the standard errors of correlation coefficients adequately small, so that a subsequent FA of those correlations would yield stable solutions.
There are some research findings that are relevant to the sample size question. There is considerable literature, for instance, on the topic of standard errors of factor loadings. As sample size increases, the variability in factor loadings across repeated samples will decrease. Formulas for estimating standard errors of factor loadings have been developed for various types of unrotated loadings (Girshick, 1939; Jennrich, 1974; Lawley & Maxwell, 1971) and rotated loadings (Archer & Jennrich, 1973; Jennrich, 1973, 1974). Cudeck and O'Dell (1994) reviewed the literature on this subject and provided further developments for obtaining standard errors of rotated factor loadings. Application of these methods demonstrates that standard errors decrease as N increases. In addition, Archer and Jennrich (1976) showed this effect in a Monte Carlo study. Although this effect is well defined theoretically and has been demonstrated with simulations, there is no guidance available to indicate how large N must be to obtain adequately small standard errors of loadings. A detailed study of this question would be difficult because, as emphasized by Cudeck and O'Dell (1994), these standard errors depend in a complex way on many things other than sample size, including the method of rotation, the number of factors, and the degree of correlation among the factors. Furthermore, a general method for obtaining standard errors for rotated loadings has been developed (Browne & Cudeck, 1997).
MacCallum, et al. (1999) state that the factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. Also, a fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. Furthermore, the same authors claim that, in fact, the necessary sample size depends on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. In conclusion, the authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in FA.
Accordingly, Marsh, et al. (1998), examining the number of indicators (p) per factor (p/f) in CFA by varying sample size (N = 50-1000) and p/f (2-12 items per factor) in 35,000 Monte Carlo simulations, conclude that researchers should consider more indicators per factor than is evident in current practice.
MacCallum, et al. (1999), however, also claim, regarding sample size, that N is in fact highly dependent on several specific aspects of a given study. Under certain conditions, a relatively small sample may be entirely adequate, whereas under other conditions, a very large sample may be inadequate. So, when discussing sample size, one must identify the aspects of a study that influence the necessary sample size. This is first done in a theoretical framework and is then verified and investigated further with simulated data. On the basis of these findings, MacCallum, et al. (1999) provided guidelines for estimating the necessary sample size in an empirical study. The theoretical framework they presented provides a foundation for the following hypotheses about the effects of sample size in FA.
We will cite their three crucial points: "1. As N increases, sampling error will be reduced, and sample FA solutions will be more stable and will more accurately recover the true population structure.
2. Quality of FA solutions will improve as communalities increase. In addition, as communalities increase, the influence of sample size on the quality of solutions will decline. When communalities are all high, sample size will have relatively little impact on the quality of solutions, meaning that accurate recovery of population solutions may be obtained using a fairly small sample. However, when communalities are low, the role of sample size becomes much more important and will have a greater impact on the quality of solutions.
3. Quality of FA solutions will improve as overdetermination of factors improves. This effect will be reduced as communalities increase and may also interact with sample size." (MacCallum, et al., 1999)
In their Monte Carlo study, MacCallum, et al. (1999) retained the known correct number of factors in the analysis of each sample. Thus it was found that excellent recovery of population factors could be achieved with small samples under conditions of high communalities and optimal overdetermination of factors. However, an open question remains whether analysis of sample data under such conditions would consistently yield a correct decision about the number of factors. "We expect that this would, in fact, be the case simply because it would seem contradictory to find the number of factors to be highly ambiguous but recovery of population factors to be very good if the correct number were retained. Nevertheless, if this were not the case, our results regarding recovery of population factors under such conditions might be overly optimistic because of the potential difficulty in identifying the proper number of factors to retain. One other limitation of our Monte Carlo design involves the limited range of levels of N, p and r that were investigated. Strictly speaking, one must be cautious in extrapolating our results beyond the ranges studied. However, it must be recognized that our findings were supported by a formal theoretical framework that was not subject to a restricted range of these aspects of design, thus lending credence to the notion that the observed trends are probably valid beyond the conditions investigated in the Monte Carlo study. Our approach to studying the sample size question in factor analysis has focused on the particular objective of obtaining solutions that are adequately stable and congruent with population factors. A sample size that is sufficient to achieve that objective might be somewhat different from one that is sufficient to achieve some other objective, such as a specified level of power for a test of model fit (MacCallum, Browne, & Sugawara, 1996) or standard errors of factor loadings that are adequately small by some definition. An interesting topic for future study would be the consistency of sample sizes required to meet these various objectives." It is impossible to derive a minimum sample size that is appropriate in all situations (MacCallum & Tucker, 1991). Using theoretical arguments and empirical evidence, these authors demonstrated that the minimum sample size needed to accurately recover a population factor pattern is a function of several variables, including the variables-to-factors ratio, the average communality of the variables, and the degree to which the factors are overdetermined (defined, in part, by the number of variables that load on each factor). Reise, Waller, and Comrey (2000) assert that when communalities are high (> .6) and the factors are well defined (have many large loadings), sample sizes of 100 are often adequate. However, when communalities are low (e.g., when analyzing items), the number of factors is large and the number of indicators per factor is small, even a sample size of 500 subjects may not be adequate.
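The interplay of communality and sample size described by MacCallum, et al. (1999) and Reise, Waller, and Comrey (2000) can be illustrated with a small Monte Carlo sketch. This is a simplified stand-in, not their actual design: we assume a one-factor population model, use the first principal component as a rough surrogate for factor extraction, and the loading values and replication counts are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def estimated_loadings(n, pop_loadings):
    """Draw a sample of size n from a one-factor population model and
    re-estimate the loadings from the sample correlation matrix
    (first principal component, a rough stand-in for factor extraction)."""
    p = len(pop_loadings)
    f = rng.standard_normal(n)
    e = rng.standard_normal((n, p)) * np.sqrt(1 - pop_loadings ** 2)
    X = f[:, None] * pop_loadings + e
    vals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    v = vecs[:, -1] * np.sqrt(vals[-1])  # loadings on the first component
    return v if v.sum() > 0 else -v      # resolve the arbitrary sign

def congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

results = {}
for label, lam in [("high", np.full(6, 0.8)), ("low", np.full(6, 0.4))]:
    results[label] = np.mean(
        [congruence(estimated_loadings(100, lam), lam) for _ in range(200)])
    print(f"{label} communalities, N = 100: mean congruence = {results[label]:.3f}")
```

With high communalities (loadings of .8), N = 100 recovers the population loading pattern almost perfectly; with low communalities (loadings of .4), the same N recovers it noticeably less well, which is the qualitative pattern the cited authors report.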
In addition to sample size, a second issue that warrants consideration is sample heterogeneity. In terms of identifying replicable factors, researchers should assemble samples with sufficient examinee representation at all levels of the trait dimensions (in order to accurately estimate the population item intercorrelations). One consequence of this rule is that using the standard pool of undergraduates may be suitable when undergraduates manifest sufficient heterogeneity with respect to trait standing. For some constructs, such as extraversion or agreeableness, this seems reasonable. For other constructs, however, such as unipolar depression or psychotic ideation, undergraduates may not be an appropriate respondent pool for accurately mapping the factor space of clinical assessment scales.
Costello and Osborne (2005) state that researchers using large samples and making informed choices from the options available for data analysis are the ones most likely to accomplish their goal: to come to conclusions that will generalize beyond a particular sample to either another sample or to the population (or a population) of interest. To do less is to arrive at conclusions that are unlikely to be of any use or interest beyond that sample and that analysis. Comrey (1978) and Reise, Waller and Comrey (2000) indicate a danger if too few factors are extracted: a researcher may miss important distinctions among the items, and the subsequently rotated solution may be distorted in non-systematic ways. However, if too many dimensions are retained, some rotated factors may be ill-defined, with only one or two salient loadings. Comrey (1967) suggests an analytic rotation strategy specifically designed to address these issues. Yet, although there are many rules of thumb and statistical indices for addressing the dimensionality issue in EFA, no procedure seems entirely satisfactory. Fava and Velicer (1992a) as well as Wood, Tataryn and Gorsuch (1996) empirically verified the effects of under- and overextraction on the factor recovery of known population structures. They generally agree that it is preferable to extract too many factors rather than too few. For instance, on the basis of a highly ambitious Monte Carlo study, Wood, Tataryn and Gorsuch (1996, p. 354) concluded that "(a) when underextraction occurs, the estimated factors are likely to contain considerable error; [and] (b) when overextraction occurs, the estimated loadings for the true factors usually contain substantially less error than in the case of underextraction."

AMBIGUOUSNESS OF RESULTS OBTAINED BY FA
Cattell (in Pervin, Cervone, & John, 2008) claims that if multivariate, factor analytic research is at all apt to determine the basic personality structure, then the same factors or personality traits should be obtained from all three types of data: from life information, from questionnaire data and from objective test data. Eysenck (1991) also defines the replicability of results as a criterion. That external measure is an essential condition sine qua non for a taxonomic paradigm of a personality description system. Any problem with replicability automatically excludes such a system from any further consideration.
The fact that different criteria produce different solutions, that is, proclaim different numbers of principal components to be significant, is probably the greatest issue of FA. For example, the Guttman-Kaiser criterion shows a tendency towards hyperfactorization, with too many principal components becoming significant, whereas the PB criterion tends towards hypofactorization, with too few principal components becoming significant (Viskić-Štalec, 1991). If FA is such a powerful mathematical-statistical procedure, as suggested by its advocates, then the same factors should be found in equivalent research (Pervin, Cervone, & John, 2005). Some personality psychology researchers claim the five-factor model to be the fundamental finding and that five factors is "just the right number of factors" (McCrae & John, 1992). Other researchers insist that fewer than five factors are enough (Eysenck & Eysenck, 1985; Eysenck, 1983, 1991, 1993; Zuckerman, 1990). Quite the opposite, Benet and Waller (1995), Buss (1988), Cattell (1990) and Tellegen (1993) suggest that the five factors that generate the manifest personality structure are not sufficient to describe personality.
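The hyperfactorization tendency of the Guttman-Kaiser criterion noted by Viskić-Štalec (1991) is easy to demonstrate on data with no latent structure at all. The sketch below (plain NumPy; the dimensions, the number of random replications and the 95th-percentile cut-off are our own illustrative choices) contrasts the eigenvalue-greater-than-one rule with Horn's parallel analysis:

```python
import numpy as np

rng = np.random.default_rng(7)

# Pure noise: 20 variables, 100 cases, no latent structure whatsoever.
n, p = 100, 20
X = rng.standard_normal((n, p))
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Guttman-Kaiser criterion: retain every component with eigenvalue > 1.
kaiser = int(np.sum(eig > 1))

# Horn's parallel analysis: retain leading eigenvalues that exceed the
# 95th percentile of eigenvalues obtained from random data of the same shape.
rand_eigs = np.array([
    np.sort(np.linalg.eigvalsh(np.corrcoef(
        rng.standard_normal((n, p)), rowvar=False)))[::-1]
    for _ in range(200)])
thresh = np.percentile(rand_eigs, 95, axis=0)
parallel = 0
while parallel < p and eig[parallel] > thresh[parallel]:
    parallel += 1

print(f"Guttman-Kaiser retains {kaiser} components; parallel analysis retains {parallel}")
```

On such data the Guttman-Kaiser rule typically declares a sizeable number of purely spurious components significant, while parallel analysis retains essentially none, which is exactly the disagreement between criteria that the text describes.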
Despite the consensus in relation to the Big Five model, there is a debate concerning its status (Goldberg & Saucier, 1995; Pervin, 1994; Eysenck, 1991, 1993). Pervin and John (1997) have noticed that many critics suggest the level of congruence among the conclusions of different studies has been less than ideal.
A number of critics suggest that the degree of correspondence among the findings from different studies has been less than ideal. In the words of one supporter of trait theory, "the resemblance is more fraternal than identical" (Briggs, 1989, p. 248). In sum, we must question whether FA will provide the basic units of personality. Opposite to the prior attitudes, for Eysenck (1985) FA is the only and the best procedure for the detection of basic or latent personality traits. As Bucik, Boben and Kranjc (1997) have asserted, FA was in the past a powerful, effective mathematical tool for the determination of the contents of the personality construct structure. However, even among the researchers who mostly agree about the methodological approach, there are still certain disagreements regarding the number of factors describing the personality structure.
Furthermore, Mlačić and Knezović (1997) report two directions or currents among researchers in the interpretation of factors obtained by FA. In their opinion, Cattell's position is the most realistic: he explicitly equates the obtained factors with the neuro-psychological structures generating behavioural patterns. At the other end are the FA interpreters who utilize FA as a substitute for cluster analysis and do not make any presumption that the obtained factors may have any significance outside the investigated group of data (Goldberg & Digman, 1994).

Mathematical problems in the application of FA methods
It must be noted that the essential mathematical problems occurring in the application of FA methods are seldom discussed among kinesiologists. The authors of the present article reason that the arithmetic restrictions of the computer, which generate numerical problems, and the limitations of the mathematical-statistical apparatus in general are restrictions that should be studied more thoroughly in kinesiology and psychology as well. Also, it should be emphasized here that the generation of exact, differentially weighted (pondered) linear combinations in practical research is an exception, and that in finite computer arithmetic most data are approximations with a small relative error. Furthermore, standard criteria aiming at assessing the quality of certain methods of FA are based on the amount of the explained variance of particular factors. Our reasoning is that the further evaluation of FA methods should be based upon the detection of the distribution of the test-statistic behaviour of certain criteria. The proposed approach might incite the usage of FA methods based upon a certain level of significance, that is, as statistical methods. Bartlett (1950) developed his famous test for the statistical significance of a correlation matrix, that is, the test of the equivalency of all of the population's roots. Furthermore, Lužar (1983, 1984) claims that significance tests generate too large a number of components, which is why a large number of criteria have been developed on non-statistical foundations, motivated by various lines of logical reasoning. Based on a simulated experiment, the author suggests it is sensible to use only three criteria: GK (Guttman & Kaiser, 1956), PB (Štalec & Momirović, 1971) and CH (Momirović & Zakrajšek, 1972).
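Bartlett's (1950) sphericity test mentioned above has a simple closed form: the statistic -[(n - 1) - (2p + 5)/6] ln|R| is referred to a chi-square distribution with p(p - 1)/2 degrees of freedom. A minimal sketch (NumPy/SciPy on simulated data; the sample sizes and loadings are our own illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's (1950) test that the population correlation matrix is an
    identity matrix, i.e. that there is no covariation for FA to explain."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

rng = np.random.default_rng(3)

# Five variables sharing one latent source: sphericity should be rejected.
f = rng.standard_normal(300)
correlated = f[:, None] * 0.7 + rng.standard_normal((300, 5)) * 0.7

# Five independent noise variables: little evidence against sphericity expected.
independent = rng.standard_normal((300, 5))

_, p_corr = bartlett_sphericity(correlated)
_, p_ind = bartlett_sphericity(independent)
print(f"correlated: p = {p_corr:.2e}; independent: p = {p_ind:.3f}")
```

A significant result only licenses extraction in the sense that the correlation matrix is not an identity matrix; as Lužar's simulations suggest, with large n the test rejects very easily, which is one reason the non-statistical retention criteria discussed above were developed.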

CONCLUSION
The next issue in FA application is determining the level of correlation among the factors. Furthermore, orthogonal solutions only define the dimensionality of the latent space, whereas oblique solutions, under conditions of simple structure, can yield a greater amount of interpreted variance. It is clear that uniform coverage of the manifest space enables more appropriate, yet still subjective, explanations.
Despite the fact that FA is one of the main multivariate analyses, used in both theoretical and applied psychological and kinesiological research, there are serious disputes regarding its appropriateness and limitations. The arguments are listed as follows:
• The reality of the existence of latent, initially hypothetical factors responsible for a number of correlations among the observed variables. The factors are a plausible mathematical-statistical frame of reference, but this does not mean that the determined latent dimensions actually exist in reality.
Sample size should be emphasized again, especially in relation to the size of the population whose latent structure is being detected. In FA, researchers should both meet the conditions for the significance of multiple correlations and be very prudent when selecting the sample size. Many authors recommend that the sample be larger than twice or three times the number of variables; this recommendation is not theoretically justified and has proven unsuitable in practice.
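A simple simulation (illustrative only, with parameters chosen by us) shows why such rules of thumb fail: even when the population correlation matrix is the identity, i.e. there are no common factors at all, sampling error in a small sample produces eigenvalues above 1, so the GK rule retains "factors" that are pure noise.

```python
import numpy as np

# 10 truly uncorrelated variables, sample of n = 30 -- three times the
# number of variables, as the common recommendation would allow.
rng = np.random.default_rng(0)
p, n = 10, 30
X = rng.standard_normal((n, p))          # population structure: no factors
R = np.corrcoef(X, rowvar=False)         # sample correlation matrix
eigvals = np.linalg.eigvalsh(R)
n_retained = int(np.sum(eigvals > 1.0))  # GK rule applied to pure noise
```

Because the eigenvalues of a correlation matrix always sum to the number of variables, any sampling scatter pushes several of them above 1, and the over-extraction only disappears as n grows well beyond these minimal ratios.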
From the application point of view, FA can be a statistical technique or method only when hypotheses are defined very precisely, covering the whole investigated space homogeneously. On the other hand, exploratory FA techniques are mathematical methods used to reduce a space to fewer dimensions, which should allow researchers the optimal use of other mathematical and statistical methods. It can be concluded that, in current kinesiology, too much significance is given to EFA in comparison with CFA.
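The reductive role described above can be sketched with the simplest such device, a principal-component projection; this is our illustrative stand-in for the exploratory step, not a full factor model with unique variances.

```python
import numpy as np

def reduce_space(X, k):
    """Exploratory reduction: project standardized data onto the first
    k principal components of the correlation matrix, yielding k
    mutually uncorrelated composite scores for subsequent analyses."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    R = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # leading k directions
    return Z @ top
```

The reduced scores are uncorrelated by construction, which is precisely what makes them convenient inputs for regression, discriminant or other follow-up methods, and also why the step is a mathematical transformation rather than a test of any substantive hypothesis.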
Furthermore, theory, assessment and quantitative research of personality in sports are insufficient for answering numerous issues and problems in sport science relating to the explanation of the internal and external determinants of an athlete's performance and sports achievement. This is due to reducing theories in psychology to a psychometric method and theories in kinesiology to a kinesiometric method, which is an anomaly of numerous quantitative studies, focusing on the identification of the obtained results instead of on the explanation of the mechanisms underlying psychological and kinesiological phenomena. We believe that integrating different complementary approaches to research (e.g. nomothetic and idiographic, quantitative and qualitative) enhances comprehension and enables greater validity of research findings as well as their practical applicability. Finally, psychologists and kinesiologists are in need of a science of discourse and a science of intervention oriented towards the processes of change.
Researchers working on different populations of athletes and using different measuring instruments obtain very different findings. It is crucial, however, whether particular extracted factors receive a high level of empirical support. After all the obtained factors have been interpreted, a final verification is performed by checking the factors' intercorrelations. If the correlations among the factors deviate from those expected, the correctness of the factors' interpretation can be doubted and the interpretation procedure should be repeated.

• Incorrect application of FA methods may lead to wrong conclusions.
• Known factor structures are frequently not corroborated by empirical research; that is, the same factors have not been found across different empirical factorial findings.
• Insufficient definition of factors: expert and research teams in a specific field should take care to denote (name) and explain the obtained factors meaningfully and coherently; a conflict between exactness and practicality may occur.
• The existence of many different rotations (because there are no quantitative indices of the quality of an individual rotation), which generates a semantic continuum of possibilities. Certain rotations are weighted based on the impression that they are more important than others. A researcher chooses the appropriate structure based on her/his subjective assessment, the goal being as sensible an interpretation of the extracted factors as possible.
• Sample size in FA depends primarily on the specific issues treated in a particular research study; therefore, it can hardly be generalized.