Posted by: wolf | June 27, 2008

Hands-on assessment

I just listened to a talk about the potential benefits of computerized psychological assessment. Besides simplifying data management and scoring, there are some things that can (almost) only be done with a computer, like adaptive testing or reaction time measurement (the overwhelming majority of studies in cognitive psychology depends on reaction time measures). But I also felt a slight discomfort while listening to that talk. The speaker gave an example of the kind of research I react allergically to in psychological assessment: he devised a test which he called “visual comprehension” (in German it was actually “Seh-Verständnis”, which is not quite identical with “visual comprehension”, but I can’t come up with a better translation). Taking that test, you are shown short educational videos (4-5 minutes) about the natural sciences; there is a speaker explaining, and what he is talking about is illustrated by the visual content. Afterwards you are asked several questions about the content of the video.
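(For those who wonder what reaction time measurement comes down to in practice, here is a minimal, purely illustrative Python sketch, assuming nothing beyond the standard library. A real experiment would of course use dedicated experimental software with millisecond-accurate display timing; the console version below only shows the principle of an unpredictable foreperiod plus a high-resolution clock.)

```python
# Toy sketch of computerized reaction time measurement (illustration only;
# console I/O is nowhere near accurate enough for real research).
import random
import time

def run_trial():
    """Present a 'stimulus' after a random delay and time the response."""
    time.sleep(random.uniform(1.0, 3.0))   # unpredictable foreperiod
    start = time.perf_counter()            # high-resolution clock
    input("Press Enter as fast as you can! ")
    return time.perf_counter() - start

rts = [run_trial() for _ in range(5)]
print(f"mean RT: {sum(rts) / len(rts) * 1000:.0f} ms")
```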

So far no problem. But what I don’t like is that this is called “visual comprehension”. Performance in such a task will depend only slightly on visual abilities (as opposed to auditory ones, say). You can watch the video only once, and you cannot re-view portions of it when answering the questions. So I would say much of the performance depends on memory systems: on how well you are able to memorize things while watching, and on how good you are at recalling them when confronted with a question. Any textbook on learning and memory will tell you that performance in such tasks depends greatly on prior subject knowledge: if you already know something about the subject at hand, that facilitates encoding, because you know what to look for, and retrieval, which can be explained by associationist models of memory, where a node is activated the more easily, the more it is connected to other nodes and the more those connections have been used in the past.
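(To make the associationist picture concrete, here is a toy sketch; the network, concepts, and weights are entirely made up. The point is only that a concept embedded in rich prior knowledge accumulates more activation and is thus easier to retrieve.)

```python
# Toy associationist memory: a node's retrievability grows with the number
# and strength of its connections. All concepts and weights are invented.
network = {
    "photosynthesis": {"chlorophyll": 0.8, "sunlight": 0.6, "biology": 0.4},
    "chlorophyll":    {"photosynthesis": 0.8, "green": 0.5},
    "sunlight":       {"photosynthesis": 0.6, "energy": 0.7},
}

def activation(cue: str) -> float:
    """Summed connection weight of a node (a crude stand-in for
    spreading activation)."""
    return sum(network.get(cue, {}).values())

print(activation("photosynthesis"))  # 1.8 -- well-connected, easy to recall
print(activation("chlorophyll"))     # 1.3 -- fewer associations
```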

So how much does this test of “visual comprehension” differ from more traditional tests of learning abilities, such as tests of “reading comprehension”, which have been included in international assessment programmes such as PISA (there called “reading literacy”)? Not much, as you might have guessed, and that is also what the speaker found. However, it would obviously not be valid to say “visual comprehension is highly correlated with reading comprehension”, because it is just stupid to speak of “visual comprehension” for the kind of test used. But it is a general problem in psychological assessment that people design a test, give it a name, and then make claims such as “[insert test name] correlates highly with [insert construct of choice]!”.

Sometimes you really should think before you act, or devise some clever test.

Posted by: wolf | June 19, 2008

Why should there not be phenomenal experiences?

Instead of just making bad jokes about zombie arguments, I have now begun reading David Chalmers’s book “The Conscious Mind”, and will probably be posting about consciousness and the like (Chalmers says there is no really good definition, and that seems all too true). One thing that nags me about the zombie argument already becomes apparent in the introduction to the book: Chalmers states that it is intuitive that there is no need for phenomenal experience; that is, he says that even if we “feel” something when, e.g., we sit at our computers and write blog posts, it would not be necessary that we feel anything. He seems to claim (I am still in the first chapter) that all we can do as human beings (thinking, acting) could also be done without experiencing anything. That is the first premise of the zombie argument: that there could be beings just like humans in all respects, with the exception that these beings, the zombies, don’t feel anything.

I do not see how this idea is supposed to be intuitive. Actually, I think it is quite contrived. Maybe, if one tries really hard, one might think of a robot that can react to its environment and perhaps even initiate actions itself without meaningful experiences. But as human beings, we act and react in most instances precisely because we have inner experiences; that is, I go to a concert because I want to feel good at that concert.

Chalmers (I address this to Chalmers, but there are other zombists) would answer that there would still be no need for subjective experience. Feeling good might be just some kind of biological¹ reward function: that is, he might state that even if something is in some way “good”, say biologically or physically, so that some biological reward function might have been installed during evolution, it would not be necessary for this function to be connected with the subjective experience of feeling good.

I don’t find this very compelling, even if I don’t have a strong argument against it yet. Why should something that is good not feel good? Why should phenomenal experience not be produced by, or even be identical to, the mechanisms of the reward function? I do not find it intuitive to dissociate subjective experience from biological function. Why should subjective experience not itself be a biological function? After all, some kind of conscious experience seems necessary at least for some of the things we humans are able to do. When you try to remember something, say your grandmother, most people would say they can produce an inner image of what their grandmother looks like; that is, they will say that they can somehow see their grandmother even if she is not physically around (of course such an image is not the same as actually seeing her). And even more to the point, the whole act of imagining a zombie’s characteristics requires some mental operations. How could it be that these mental operations are not there in some way? How could it be that a human being is not aware of her or his mental operations? Chalmers seems to postulate that there is something going on in addition to the mental operations. That is in no way intuitive.

Why am I going on so much about intuitiveness? Chalmers puts a lot of emphasis on the point that zombies, physically¹ like us in every respect, are intuitively conceivable. So far it seems to me that conceivability is just a way of saying that something is not strongly counterintuitive, and the zombie argument rests on the assumption of conceivability. There is no empirical evidence in the zombie discussion; the whole argument is, to put it derisively, out of the armchair. I am not against armchair argumentation as such. But then the argument had better be not only intuitive but also comprehensible, and I don’t really buy an argument when somebody cannot even explain the premises. To repeat: I have only just started reading Chalmers’s work, so maybe I am not getting things right. I am very curious about the next chapters.

¹ Daniel Dennett has noted that it is especially the zombists and other dualists, i.e. people who think consciousness cannot be explained in terms of the body and brain, who talk about physical laws instead of taking into account that biology, neuroanatomy, and neurophysiology have something to add beyond “pure” physics.

Posted by: wolf | June 13, 2008

nobody expects the…

statistical graphs quality enforcement agency.

See also here.

Posted by: wolf | June 13, 2008

Zombies and Dinosaurs

www.qwantz.com is the definitive source for scientists. Ryan North has already contributed to the debates on methodology, and today he put up another astute remark on the mind-body problem.

Talking about zombies, I had my own stab at the issue: am I a zombie, or am I David Chalmers? That’s the question.

I wonder if David Chalmers has some dinosaurs in his secret below-the-sea chambers.

Posted by: wolf | June 11, 2008

how to find that slope

Posted by: wolf | June 10, 2008

Teh REview Prcess

When I submitted my first paper to a peer-reviewed journal, I was anxious not only about the quality of the empirical methods and results, but also about language and style, all the more since I am not a native speaker. Granted, being a non-native speaker might actually work in my favor, since reviewers will probably be less rigorous once they realize where I come from. But believe me, I would take just as great pains with the stylistic quality of a paper if I were submitting it to a journal published in German (my native tongue).

Now that I have been asked to review others’ manuscripts for scientific journals, I have come to realize that other authors don’t always seem to be as thorough and diligent as I thought everyone in the scientific community would be. Before I submit a paper, I carefully go through it several times and give it to colleagues or friends, in order to at least weed out the most obvious misspellings and grammatical errors; I also ask them to check the consistency and comprehensibility of the text. Today I was asked to review a short paper for a German journal in educational psychology, and that paper surely can’t have gone through an internal review process as described above. There were at least four spelling errors on each page, even one in the title, and beyond that, whole passages were barely understandable because of inconsistently used terms; for example, the authors wrote of a “correlation between course observation and competency gains”, which seems to imply that observing a course leads to an increase in competency, when they actually meant the correlation between some characteristic of the observed course and competency gains. And that wasn’t the only truly bad paper on my desk. In another paper, the authors were not only very careless with spelling and grammar, they also hadn’t bothered to collect new data! Even if it is often beneficial and revealing to re-analyze data, in that case there weren’t any really new conclusions, just a difficult-to-interpret mumbo-jumbo of “looks as if it would be interesting to pursue in further analyses, but not interesting enough for ourselves, the great re-analyzers of previously collected data”.

One of the people I have published with, a seasoned scientist, told me he has the impression that an increasing number of authors forgo an “internal” revision process (i.e. asking your colleagues, friends, or whoever) before submitting a paper and tend to offload some of the more tedious work (making your paper readable instead of just putting together the results and some refs) onto the reviewers. He said the reason might be that people think: “the paper will have to be revised anyway, so why bother”. But that is not what I think the review process should be about. I want to think about the scientific quality of a paper: Does it in any way advance our knowledge, our insights, our “Erkenntnisse” about the phenomena analyzed? Have the authors proceeded in an acceptable way? Did they adhere to methodological standards? I do not want to think about how a paper I haven’t written myself can be made more readable and comprehensible.

Posted by: wolf | June 6, 2008

Social Psychology Automata

I just read John Kihlstrom’s article “The Automaticity Juggernaut” (TAJ), in which he delivers what he had already handed out to Daniel Wegner once before: “a good scolding”, in Wegner’s words, when Kihlstrom commented on Wegner’s précis of “The Illusion of Conscious Will”. In TAJ, Kihlstrom shows that he not only disdains Wegner’s take on the psychological side of mental causation: he extends that view to other prominent social psychologists, most notably John Bargh. Like that of his opponents, especially Wegner’s, Kihlstrom’s writing is entertaining and provocative, and all of them have put some of their papers online; if you are interested in psychologists’ views on free will and mental causation (and these days everybody seems to be concerned with the issue), you might want to have a look.

Now what is this all about?

Conscious will … is an indication that we think we have caused an action, not a revelation of the causal sequence by which the action was produced. (Wegner, p. 649; emphasis in original; see link above)

and

Much of social life is experienced through mental processes that are not intended and about which one is fairly oblivious. (Bargh & Williams, 2006, p. 1).

Kihlstrom interprets quotes such as these (and, of course, the longer articles they are taken from) as statements of the belief that there is no free will. Wegner and Bargh, in Kihlstrom’s interpretation, deny the possibility of conscious mental causation: they deny that what we believe to cause our doing something, i.e. conscious decisions, is the “true” cause of our actions. Read More…

Posted by: wolf | June 4, 2008

Statistical self-immunization

(Warning: serious, difficult content interspersed with ranting and exasperation)

A possible source of empiricism in psychology (i.e. overreliance on empirical results, combined with devaluing, or at least actively ignoring, theoretical analysis) is a phenomenon that often occurs in statistical analyses, and all the more so, the more variables you put into the analysis: you won’t get a clear, unambiguous result. Got your attention? Read on!
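(Before you do: here is a quick, purely illustrative simulation of the kind of ambiguity I mean, with invented numbers. Ten intercorrelated predictors, of which only one truly matters; the estimated coefficients of the irrelevant ones bounce around from sample to sample, so there is always something that looks promising.)

```python
# Hedged sketch: many correlated predictors -> unstable, ambiguous results.
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 10

for sample in range(3):
    common = rng.normal(size=(n, 1))         # shared factor -> intercorrelation
    X = 0.7 * common + 0.7 * rng.normal(size=(n, p))
    y = X[:, 0] + rng.normal(size=n)         # only the first predictor matters
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"sample {sample}: betas = {np.round(beta, 2)}")
```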

Read More…

Posted by: wolf | June 3, 2008

the lack of theory in psychology

Psychology is (at least in the mainstream) an empirical science. What counts in the scientific psychological community, and for most psychologists with a scientific education, are the results of empirical research. For example, many psychologists will sneer at psychoanalytic theory; however, they are much less likely to sneer at the results of psychoanalytic therapy, given that it is notoriously hard to demonstrate substantial differences in effectiveness between the more established forms of therapy. [Note: I am not an expert in psychotherapy; there are differences in effectiveness conditional on the kind of disorder, and there are differences in effectiveness conditional on the person being treated.]

Another example, closer to basic research, is a study on executive functions by Miyake et al. (2000). Executive functions are those that enable us to keep acting on a goal we have set over time and to organize the different kinds of behavior required to reach that goal; e.g. to shift our attention between the paper we have to write and the students who keep knocking at our door, or to refrain from ordering a pizza and instead stick to the salad. With their factor-analytic study of the empirical relations between different tasks presumed to measure executive functions, Miyake et al. (2000) single-handedly took over definitional authority over what counts as an executive function, at least in the land of experimental psychologists, as exemplified by the more than 600 citations their paper brings up in Google Scholar. Today, many experimental psychologists would agree that executive functions comprise “shifting, updating [working memory] and inhibition”; see e.g. a recent review on executive functions in preschool children. No theoretical classification of executive functions could ever have been that successful in psychology.
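(Miyake et al. actually used confirmatory factor analysis; the following exploratory sketch with invented data only illustrates the basic logic of extracting latent functions from a battery of task scores.)

```python
# Illustration only: simulate six tasks loading on three latent executive
# functions, then recover the factor structure. All data are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
shifting, updating, inhibition = rng.normal(size=(3, n))  # latent functions

tasks = np.column_stack([
    shifting + rng.normal(scale=0.5, size=n),    # two hypothetical shifting tasks
    shifting + rng.normal(scale=0.5, size=n),
    updating + rng.normal(scale=0.5, size=n),    # two updating tasks
    updating + rng.normal(scale=0.5, size=n),
    inhibition + rng.normal(scale=0.5, size=n),  # two inhibition tasks
    inhibition + rng.normal(scale=0.5, size=n),
])

fa = FactorAnalysis(n_components=3).fit(tasks)
print(np.round(fa.components_, 2))  # loadings: tasks cluster by latent function
```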

Isn’t that actually a good thing? Of course, empirical results really should count more than theory whenever there is no empirical evidence to support a theory. But the condition in italics is important. Forgoing theory can seriously stall empirical progress. My pet example is personality psychology: many psychologists are simply convinced that there are universal traits that can be observed in every single human being, and that there are only differences in degree, not in kind, i.e. that every person can be assigned a value on extraversion. If you stay within that approach and never think of changing to another perspective, you will have to get used to weak empirical associations between those trait scores and other criteria. Walter Mischel, in a book published in 1968, famously criticised trait psychology for being unable to generate correlations with relevant criteria above r = .30 (which amounts to less than 10% of shared variance), and things have not changed very much in the last 40 years; see e.g. Barrick, Mount and Judge’s meta-meta-analysis on the relation of the Big Five to job performance.

Why researchers still hang on to their old theories and methods is somewhat mysterious. One reason might be that the results of many empirical-statistical methods are not unequivocal. For example, the disappointingly low correlations of personality trait measures with criteria such as job performance are often explained away by low reliabilities and range restriction. In essence this leads to conclusions like “if the measures had perfect reliability and if there were no range restriction, the ‘real’ correlation would be much higher than the one observed”. So instead of thinking about better ways to measure something, or contemplating a theoretical change, people stick to their old suboptimal ways.
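(For the record, the corrections alluded to are the classical Spearman disattenuation formula and, for range restriction, Thorndike’s Case II formula. A sketch with invented example numbers:)

```python
# Standard psychometric corrections (example values are invented).
from math import sqrt

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """'True' correlation if both measures were perfectly reliable
    (Spearman's correction for attenuation)."""
    return r_obs / sqrt(rel_x * rel_y)

def correct_range_restriction(r_obs: float, u: float) -> float:
    """Thorndike Case II; u = SD(unrestricted) / SD(restricted) predictor."""
    return r_obs * u / sqrt(1 + r_obs**2 * (u**2 - 1))

r = 0.25                                            # observed validity
print(round(disattenuate(r, 0.75, 0.60), 2))        # ~0.37
print(round(correct_range_restriction(r, 1.5), 2))  # ~0.36
```

You can see the appeal: without measuring anything better, the reported correlation goes up.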

Posted by: wolf | May 29, 2008

The nature of the Big Five

Just yesterday I wrote about the lexical hypothesis in personality psychology. Even if some researchers now propose a six-factor general personality trait theory, the Big Five (or the OCEAN model: openness, conscientiousness, extraversion, agreeableness, and neuroticism) is probably by far the most popular general personality (trait) theory. Among Big Five theorists, there is a group led by Paul Costa and Robert McCrae who have somewhat distanced themselves from the lexical origin of the Big Five and in their more recent writings propose that the Big Five are not only descriptive terms, but biologically based “basic tendencies” (e.g. in McCrae, Costa, Ostendorf, Angleitner, Hrebickova et al., 2000)¹. However, they do not really explain what “basic tendencies” might mean; instead they put forward evidence for the cross-cultural generalizability and long-term stability mainly of questionnaire scores. The argumentation is kind of backwards. They say: “Look, the questionnaire scores are cross-culturally and intraindividually stable. They must be biologically based.” This is in itself a very weak argument; one might put forward all kinds of objections, such as that long-term stability does not really tell us whether something is biologically based, and that the cross-cultural generalizability has been heavily disputed, see e.g. here.

Lisa Pytlik Zillig, Hemenover and Dienstbier (2002) have explored another line of evidence against the “basic tendencies” claim of the Costa-McCrae theorists. Instead of meeting them on their own ground, Pytlik Zillig et al. simply took several Big Five inventories (questionnaires and adjective lists) and analyzed the content of the items in these inventories. They classified each item into one of three categories: does the item describe affects, behaviors, or cognitions? (The distinction between affect, behavior, and cognition is quite common in psychology; e.g. some models of attitudes assume that an attitude has something of all three: an [evaluative] affective component, a behavioral component [how does one react], and a cognitive, non-evaluative component.)
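(The tallying itself is straightforward; here is a sketch with invented ratings, just to show what the content analysis amounts to. Pytlik Zillig et al. of course used the real inventory items and multiple trained raters.)

```python
# Sketch of the content-analysis tally (all classifications invented).
from collections import Counter

# each entry: (factor the item is supposed to measure, rated category)
ratings = [
    ("neuroticism", "affect"), ("neuroticism", "affect"),
    ("neuroticism", "behavior"),
    ("conscientiousness", "behavior"), ("conscientiousness", "behavior"),
    ("conscientiousness", "cognition"),
]

by_factor: dict[str, Counter] = {}
for factor, category in ratings:
    by_factor.setdefault(factor, Counter())[category] += 1

for factor, counts in by_factor.items():
    total = sum(counts.values())
    print(factor, {c: f"{k / total:.0%}" for c, k in counts.items()})
```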

The results are somewhat disappointing for the “basic tendencies” idea: across different inventories and across different groups of raters, items for the Big Five factors differ systematically with regard to how much they reflect affect, behavior, and cognition. Most striking is the difference between items assumed to assess neuroticism and items for conscientiousness: where the former are mainly (60-90%) about affects, the latter are mainly about behaviors (again 60-90%), and almost none of the conscientiousness items describe affects. Only for agreeableness items are the three categories represented about equally.

What does that mean for the idea of the Big Five being “biologically based basic tendencies”? The results cast serious doubt on it. If the basic tendencies idea were right, one would expect all of the Big Five factors to correspond to affects, behaviors (or behavioral tendencies), and cognitions equally. The finding that some factors have more to do with affects (i.e. neuroticism, and to some extent extraversion), others mainly with cognitions (openness), and a third group with behavior (conscientiousness and extraversion) is much more compatible with the idea that the Big Five are about different things and do not operate at the same level, i.e. that they are not (all) basic tendencies. If one thinks of a “basic tendency” as something in the brain, say, somebody with a higher level of neuroticism having a more excitable neural network, how could this “basic tendency” be comparable to a “basic tendency” like conscientiousness that seems to be mainly about behaviors? In the words of Pytlik Zillig et al.:

Assume for the moment that there is some very basic core or reality to Big 5–level traits, that the ABC [affect, behavior, and cognition] dimensions are highly meaningful constructs for assessing that core, and that the operational definitions of traits on ABC dimensions in major inventories reasonably reflect those underlying latent traits. Given those assumptions, our findings suggest that abstract arguments (and conceptual definitions of traits, such as found in personality texts) about the basic nature of traits may miss the mark.

As a final remark: this study again shows that, in personality psychology, words are not equal to their meanings. The Costa and McCrae theorists claim that their Big Five are basic tendencies, biologically based and causally operating. But their instruments do not support this claim: the content analysis of the instruments reveals that the Big Five factors differ in what they refer to.

¹ I realize that most of the links require subscriptions, but I can’t help citing peer-reviewed sources.
