Psychology is (at least in the mainstream) an empirical science. What counts in the scientific psychological community, and for most psychologists who have been educated scientifically, are the results of empirical research. For example, many psychologists will sneer at psychoanalytic theory; however, they are much less likely to sneer at the results of psychoanalytic therapy, given that it is notoriously hard to demonstrate substantial differences in effectiveness between the more established forms of therapy. [Note: I am not an expert in psychotherapy; there are differences in effectiveness conditional on the kind of disorder, and differences in effectiveness conditional on the person receiving therapy.]
Another example, going further into basic research, is a study on executive functions by Miyake et al. (2000). Executive functions are those that enable us to maintain goal-directed action over time and to organize the different kinds of behavior required to reach a goal; e.g., to shift our attention between the paper we have to write and the students who keep knocking at our door, or to refrain from ordering a pizza and stick to the salad instead. With their factor-analytic study of the empirical relations between different tasks presumed to measure executive functions, Miyake et al. (2000) single-handedly took over definitional authority over what counts as an executive function, at least among experimental psychologists, as exemplified by the more than 600 citations their paper brings up on Google Scholar. Now, many experimental psychologists would agree that executive functions comprise “shifting, updating [working memory], and inhibition”; see, e.g., a recent review on executive functions in preschool children. No theoretical classification of executive functions would ever have been so successful in psychology.
Isn’t that actually a good thing? Of course, empirical results really should count more than theory *whenever there is no empirical evidence to support a theory*. But the condition in italics is important. To forgo theory can seriously stall empirical success. My pet example is personality psychology: many psychologists are just so convinced that there are universal traits that can be observed in every single human being, and that there are only differences in degree but not in kind, i.e., every person can be assigned a value on extraversion. If you stay inside that approach and never think of changing to another perspective, you will have to resign yourself to weak empirical associations between those trait scores and other criteria. Walter Mischel famously criticised trait psychology for being unable to generate correlations with relevant criteria above r = .30 (which is not very high) in a book published in 1968, and things have not changed very much in the last 40 years — see, e.g., Barrick, Mount and Judge’s meta-meta-analysis on the relation of the Big Five to job performance.
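To see why r = .30 is “not very high”, it helps to translate the correlation into shared variance: the proportion of variance in the criterion that the predictor accounts for is simply r squared. A quick sketch (the numbers are just Mischel’s ceiling, used for illustration):

```python
# Shared variance for Mischel's r = .30 ceiling: r**2 of the
# criterion's variance is accounted for, the rest is not.
r = 0.30
shared = r ** 2
print(f"variance explained: {shared:.0%}")   # → variance explained: 9%
print(f"variance unexplained: {1 - shared:.0%}")  # → variance unexplained: 91%
```

So a trait score correlating at .30 with job performance leaves about 91% of the performance variance unaccounted for, which is what makes the figure so sobering.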
Why researchers still hold on to their old theories and methods is somewhat mysterious. One reason might be that the results of many empirical-statistical methods are not unequivocal. For example, the disappointingly low correlations of personality trait measures with criteria such as job performance are often explained away with low reliabilities and range restriction. In essence, this leads to conclusions like “if the measures had perfect reliability and if there were no range restriction, the ‘real’ correlation would be much higher than the one observed”. So instead of thinking about better ways to measure something, or contemplating a theoretical change, people stick to their old suboptimal ways.
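The reliability part of that “explaining away” is usually done with Spearman’s classic correction for attenuation: divide the observed correlation by the square root of the product of the two measures’ reliabilities. A minimal sketch, with made-up numbers chosen only to show how far the correction can inflate a modest observed correlation:

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: the estimated 'true'
    correlation, assuming the observed one is dragged down solely by
    measurement unreliability in x and y."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative (assumed) values: an observed validity of .20, with
# reliabilities of .80 for the trait measure and .60 for the
# job-performance criterion.
r_true = disattenuate(0.20, 0.80, 0.60)
print(round(r_true, 2))  # → 0.29
```

Note what the correction does rhetorically: the observed .20 becomes a “real” .29 without any new data being collected, which is exactly why it can substitute for thinking about better measures or better theory.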