One of the mainstream positions in general personality theory (i.e. approaches that try to characterize all of personality) is the lexical hypothesis. This hypothesis has been stated, for example, by John, Angleitner and Ostendorf (1987; subscription required) as follows:

Those individual differences that are most salient and socially relevant in people’s lives will eventually become encoded into their language; the more important such a difference, the more likely it is to become expressed as a single word. (p. 175)

This hypothesis is the origin of the five-factor model of personality. I won’t review the complicated history, but basically one could say that Allport and Odbert, adopting the lexical hypothesis, started with a huge collection of adjectives and somewhat arbitrarily reduced that list. That reduced list then became the starting point for Cattell’s research on general personality traits, which is itself marked by a degree of arbitrariness and a lack of reproducibility. Nevertheless, all those joint efforts at some point led to a gargantuan body of papers claiming that the number of personality traits is five, and only five, and thou shalt not doubt Costa and McCrae, and they are the prophets of the BIG FIVE MODEL.

Ok, I got carried away a little bit; Costa and McCrae have actually started to claim that their model isn’t really based on the lexical hypothesis anymore, and they are not using adjective scales, but their basic hypothesis of five factors was still derived from the lexical hypothesis. And what’s more, Lewis Goldberg, probably the most important figure in mainstream personality psychology apart from C&McC, actually developed adjective marker scales for the Big Five. (I realize that nowadays Goldberg and others think there are not only five, but six general personality traits, e.g. here, but the point I want to make applies nevertheless.)

Here’s a simple rebuttal of the lexical hypothesis, taken from Cervone and Lott (2007, probably subscription required): The lexical hypothesis states that people invent a word for things (specifically personality characteristics) that are important to them. This hypothesis has been supported, for example, by the myth that there are many more terms for snow in the Inuit languages than in English. But that assumption is wrong! Cervone and Lott cite sources showing that there are actually “Some Eskimo-Aleut languages [that] have fewer such words than does English” (p. 431). So if there is something wrong with the foundation of the lexical hypothesis, what does that tell us about the models that are based on this approach?


(the first part doesn’t really have to do with the second, except for its silliness)

Yesterday I listened to a talk about (psychometric) reliability estimation for instruments composed of binary items (as in true/false). It wasn’t really that interesting, but OTOH not as boring as it may seem. Still, most of what I remember is one thing the speaker repeated several times:

“just one line of code and we’re in the 21st century!”

He said there was a saying among statisticians: in the 20th century, it was all about point estimates; now, in the 21st century, it’s about estimating confidence intervals for those parameters. I really do like this saying, but he made a caricature of it (and of himself, by the way), since he never showed us how the CIs in his special case were estimated (in terms of formulae). He just showed us the input file for the Mplus program he used, ending with

OUTPUT: CINTERVAL;

Don’t you feel the magic of that line? ONE LINE OF CODE AND WE’RE IN THE 21ST CENTURY! ONE LINE OF CODE AND WE’RE IN THE 21ST CENTURY! ONE LINE OF CODE AND WE’RE IN THE 21ST CENTURY! ONE LINE OF CODE AND WE’RE IN THE 21ST CENTURY!

I have to stop this.
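Joking aside: for anyone who would rather see the 21st century spelled out than conjured by a single Mplus line, here is a minimal sketch of one common way to get a confidence interval for the reliability of a set of binary items, namely a percentile bootstrap of the KR-20 coefficient. This is just an illustration of the general idea, not what Mplus (or the speaker) actually does, and the data are simulated.

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson 20 reliability for a persons x items matrix of 0/1 scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion of 1s per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1.0)) * (1.0 - (p * (1.0 - p)).sum() / total_var)

def bootstrap_ci(items, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for KR-20, resampling persons with replacement."""
    rng = np.random.default_rng(seed)
    n = items.shape[0]
    stats = np.array([kr20(items[rng.integers(0, n, n)]) for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return kr20(items), (lo, hi)

# Made-up example: 200 simulated respondents answering 10 true/false items
rng = np.random.default_rng(42)
ability = rng.normal(size=(200, 1))
data = (ability + rng.normal(size=(200, 10)) > 0).astype(int)

estimate, (ci_low, ci_high) = bootstrap_ci(data)
print(f"KR-20 = {estimate:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

It is a handful of lines rather than one, but at least you can see where the interval comes from.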

I was in a hilarious mood anyway, because on the way I had seen that someone was planning an experiment on altruism with fMRI. Well, he wouldn’t be the first. Actually, I don’t know whether this isn’t even more awesome than the one line of code in the 21st century:

The brain’s reward center lights up on an MRI image when subjects give money to charity.

I repeat:

The brain’s reward center lights up on an MRI image when subjects give money to charity.

I think my own brain’s reward center is lighting up so much right now that I could turn off the lights; and don’t even mention boobies.

Posted by: wolf | May 8, 2008

pie chart fail

The pie chart may be the most hated way to plot some data. Company executives love them, but statisticians don’t — you might want to check this to see why.

Now I have found an example of what seems to be an even worse kind of graph: the ring or donut chart. I made the following in OpenOffice:

(The example comes from an unpublished report on student admissions in Germany.)

Well, the data are actually of the one kind that pie charts are suited for (the following is taken from Wikipedia):

pie charts can be an effective way of displaying information in some cases, in particular if the intent is to compare the size of a slice with the whole pie, rather than comparing the slices among them.

For the student admission example, we have percentages adding up to 100, and we might want to visually compare the sizes of the different fractions, but even that gets blurred by the silly donut display.
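If you want to see the difference for yourself, here is a small matplotlib sketch that draws the same numbers once as a donut chart and once as a plain bar chart. The category labels and percentages are made up for illustration, since the actual report is unpublished.

```python
import matplotlib.pyplot as plt

# Hypothetical admission percentages, standing in for the unpublished report's data
labels = ["Admitted directly", "Waiting list", "Rejected", "Withdrew"]
percent = [42, 23, 27, 8]

fig, (ax_donut, ax_bar) = plt.subplots(1, 2, figsize=(9, 4))

# Donut chart: a pie with the middle cut out, which is exactly what blurs the comparison
ax_donut.pie(percent, labels=labels, wedgeprops=dict(width=0.4), startangle=90)
ax_donut.set_title("Donut chart")

# Bar chart: lengths on a common axis are far easier to compare by eye
ax_bar.bar(labels, percent)
ax_bar.set_ylabel("Percent of applicants")
ax_bar.set_title("Bar chart")
ax_bar.tick_params(axis="x", labelrotation=20)

plt.tight_layout()
plt.show()
```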

It was hard to find an even worse chart for these data, but I tried:

What do you say?

Posted by: wolf | April 15, 2008

Optimistic brains

ResearchBlogging.org

popular media show it once again: it’s our brains that make the world go round. Well, what a surprise. This time I couldn’t resist commenting, because I already knew the scientific paper in question, and because I have done psychological research on optimism (that’s what the paper is about) myself.

It was a headline on the website of the German e-mail provider gmx.de, “Optimism emerges from the brain”, that almost made me spill my coffee over the keyboard. I won’t really repeat how stupid such a line is. Where else should optimism come from? In this case, however, it is mostly the journalists who are to blame for the Duh! factor. The original research isn’t that simplistic:

In an imaging study, participants were asked to think of “autobiographical events related to a description of a life episode (for example, ‘winning an award’ or ‘the end of a romantic relationship’)”, either real events from their own past or future events that might happen to them; events were classified as “positive” or “negative” (the small number of neutral events was discarded). Afterwards, participants were also asked to rate their memories and projections (imaginations of future life events) with regard to several qualities, e.g. how “vivid” the imagination felt or how near or far in time it felt.

Apart from the imaging results (below), it was found that more optimistic participants (as measured with the “Life Orientation Test”, LOT-R) reported that they expected “positive events to happen closer in the future than negative events, and to experience them with a greater sense of pre-experiencing”.
I find this interesting from a purely psychological point of view, even if it is not a wholly original result: it shows that optimism is not just a static more-or-less dimension, but instead refers to something proactive. Being optimistic means to be able to imagine positive outcomes.

Now for the neuroscience results: when imagining positive events, there was more brain activity (blood oxygen level dependent, or BOLD, signal) in the rostral anterior cingulate cortex (rACC) and the amygdala than when imagining negative events. The former region has been identified as involved in the processing of autobiographical memory as well as the imagination of future events, the latter as involved in the processing of emotion, also in autobiographical memory. Most interestingly, rACC activity was correlated with LOT-R scores: participants with a greater difference in BOLD signal for positive vs. negative events on average had higher scores on the LOT-R. Again this is interesting because the imaging results refer to actively imagining something, whereas the LOT-R is assumed to measure stable differences.

So after all this is an example of a nice integration of psychological with neuroscience results: the imaging results from active imagination of future events lend some validity to the static LOT-R questionnaire scores.

Sharot, T., Riccardi, A.M., Raio, C.M., Phelps, E.A. (2007). Neural mechanisms mediating optimism bias. Nature, 450(7166), 102-105. DOI: 10.1038/nature06280

Posted by: wolf | April 10, 2008

Within the green the mouldered tree my neurons fire

Like the man without qualities, I will just point you toward Raymond Tallis’s taking apart of neurobabble in literature. It is incredibly well informed, and really worth the read.

Posted by: wolf | March 28, 2008

Why brain imaging is (sometimes) overrated

Here, I don’t want to discuss any specific brain imaging study, and I am being careful in the title: I would bet a large amount of money that there are at least as many crappy behavioral studies out there as there are crappy brain imaging studies. Instead, I will try to give a brief explanation why psychologists or behavioral scientists can get rather angry when they have to read, view or hear yet another silly popular media coverage of a scientist in a lab coat with coloured brains flashing in the background stating something like “the brain’s reward system is active when something feels good”. Duh!

In a nutshell: Brain imaging attracts a disproportionate amount of attention while often failing to be innovative. Above that, brain imaging studies are excessively expensive and yet often fail to meet the methodological standards that apply to behavioral experiments.

I will try to give some more background:

First of all, a lot of brain imaging studies simply repeat older behavioral experiments and then show that something somewhere in the brain “lights up”. Of course, this kind of research sometimes leads to valuable results, but often you will just get something like this: whenever subjects are required to do something involving language and higher order cognition (applying some kind of rule or whatever), you will see a coloured region in the prefrontal cortex (higher order cognition) and a coloured Broca’s and/or Wernicke’s area. But that’s something we have known for decades.

Then, obviously there are physical limits to what kind of experiments can be done while a person lies inside a huge tube like this one:

[image: varian4t.jpg]

You cannot really move inside a scanner; as a matter of fact, researchers will routinely try to stop you from moving, because head motion and the brain activity connected with movement will distort the results. The only movements typically allowed are button presses, to indicate some kind of reaction to stimuli presented on a computer screen. And besides being prohibited from moving, it is incredibly loud inside the scanner; the sound is similar to standing next to a jackhammer. So clearly there are many psychologically interesting experiments that can never be done inside a scanner.

These first points (repeating older behavioral studies, failing to yield novel results, being limited to pressing buttons inside a tube whilst having to endure some rather heavy noise) can really make a creative psychologist angry. But there is more:

A brain scan is incredibly expensive (for behavioral scientists); the cost of examining a single person can run to several thousand dollars. These costs restrict the number of subjects in the typical imaging study to something like ten to thirty persons; the claustrophobic atmosphere and the noise will typically restrict sampling to healthy young adults; and imaging noise induced by unwanted movement and other experimental error may decrease the sample size further when the respective images have to be excluded from the analysis. Of course, in behavioral experiments you also have to exclude participants, but because of the lower costs such loss is far less detrimental.

In behavioral studies, sample sizes are way larger for a reason (well, at least one): statistical power. With a small number of participants, as in the typical imaging study, scientists will only be able to substantiate the really obvious results (like the prefrontal cortex being involved in higher order cognition); the more detailed analyses would require several times as many participants. You cannot find out much about individual differences when you only look at a very small number of people, because when one or two out of twelve participants show a different kind of brain activity than the others, that deviation has to be attributed to experimental error. I would estimate that you could run 5 to 10 behavioral experiments (with sample sizes of 60-80!) for the money spent on a typical imaging study.
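To make the power point a bit more concrete, here is a back-of-the-envelope sketch (my own illustration, not taken from any particular study) that compares typical imaging-sized samples with a typical behavioral sample for a simple two-group comparison, assuming a medium standardized effect of d = 0.5:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Medium effect (Cohen's d = 0.5), two-sided independent-samples t-test, alpha = .05
for n_per_group in (8, 15, 35):   # i.e. 16 to 70 participants in total
    power = analysis.power(effect_size=0.5, nobs1=n_per_group, ratio=1.0, alpha=0.05)
    print(f"n = {n_per_group:>2} per group -> power = {power:.2f}")

# Sample size per group needed to reach the conventional 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, ratio=1.0, alpha=0.05)
print(f"~{n_needed:.0f} participants per group needed for 80% power")
```

With fifteen or fewer people per group, only rather large effects have a realistic chance of being detected, which is exactly the point about substantiating only the really obvious results.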

And despite all that, laypeople will still be more convinced by a crappy explanation when there is a coloured brain attached…

[image: homer-simpson-brain-1024.jpg]

Posted by: wolf | March 27, 2008

yeah, this IS awesome

Well, I was briefly contemplating commenting on brain imaging studies. But then…

I am going to have some coffee, definitely.

Posted by: wolf | March 26, 2008

Robots

now I am back and catching up on what has happened in the blogosphere (well, at least in the small part I am trying to keep track of). Via Derek James I found the video below:

Note how the robot slips on ice but quickly regains its balance. Adapting a German saying (“getting the cow off the ice”, i.e. getting out of a tricky situation), we might conclude: now robots will get our cows off the ice…

Posted by: wolf | March 7, 2008

Away

Now this is the last post for some days — I am almost off on my holidays 🙂

Posted by: wolf | March 7, 2008

Oh, those individualized white bears on my mind

ResearchBlogging.org
How many times in the past month have you thought about a white bear? Chances are high that you never did, unless you’ve heard about the polar bear cub “Flocke” (“Snowflake”) that was born in Nuremberg (Germany) and rejected by its mother.

But wait — I am not going to write about Flocke, unbearably (pardon the pun) cute though she may be.

Let me first give some background before I actually comment on the really nice study by Brewin and Smart that earned me the BPR3 icon. For psychologists, the phrase “don’t think of a white bear” is going to ring a bell (I really have to stop that wordplay…). In the ’80s, Daniel Wegner and colleagues published a seminal study on thought suppression in which they asked people not to think about a white bear for some time; during that time, participants had to ring a bell whenever they thought of a bear despite their attempts at suppressing the idea. The result is a classic: Read More…

