200 Researchers, 5 Hypotheses, No Consistent Answers

Another team tested the same hypothesis by asking subjects to self-identify with a political party and then to rank their feelings about a hypothetical member of the opposition party. Using this approach, they found that people are very willing to report their own negative stereotypes. Meanwhile, a third team showed subjects photos of men and women who were white, black, or overweight (as well as of puppies or kittens) and asked them to rate their “immediate ‘gut level’ reaction towards this person.” Their results also showed that people did indeed cop to having negative associations with people from stigmatized groups.

When the study was over, seven groups had found evidence in favor of the hypothesis, while six had found evidence against it. Taken together, these data would not support the idea that people recognize and report their own implicit associations. But if you’d seen results from only one group’s design, it would have been easy to come to a different conclusion.

The study found a similar pattern for four out of five hypotheses: Different research teams had produced statistically significant effects in opposite directions. Even when a research question produced answers in the same direction, the sizes of the reported effects were all over the map. Eleven of 13 research teams produced data that clearly supported the hypothesis that extreme offers make people less trusted in a negotiation, for example, while findings from the other two were suggestive of the same idea. But some groups found that an extreme offer had a very large effect on trust, while others found that the effect was only minor.

The moral of the story here is that one specific study doesn’t mean very much, says Anna Dreber, an economist at the Stockholm School of Economics and an author on the project. “We researchers need to be way more careful now in how we say, ‘I’ve tested the hypothesis.’ You need to say, ‘I’ve tested it in this very specific way.’ Whether it generalizes to other settings is up to more research to show.”

This problem—and this approach to demonstrating it—isn’t unique to social psychology. One recent project similarly asked 70 teams to test nine hypotheses using the same data set of functional magnetic resonance images. No two teams used the exact same approach, and their results varied as you might expect.

If one were judging only by the outcomes of these projects, it might be reasonable to guess that the scientific literature would be a thicket of opposing findings. (If different research groups often arrive at different answers to the same questions, then the journals should be filled with contradictions.) Instead, the opposite is true. Journals are full of studies that confirm the existence of a hypothesized effect, while null results are squirreled away in a file drawer. Think of the results described above on the implicit-bias hypothesis: Half the groups found evidence in favor and half found evidence against. If this work had been carried out in the wilds of scientific publishing, the former would have taken root in formal papers, while the rest would have been buried and ignored.

The demonstration from Uhlmann and colleagues suggests that hypotheses should be tested in diverse and transparent ways. “We need to do more studies trying to look at the same idea with different methods,” says Dorothy Bishop, a psychologist at the University of Oxford. That way, you can “really clarify how solid it is before you’re jumping up and down and making a big dance about it.”

The results certainly argue for humility, Uhlmann says. “We have to be careful what we say in the article, what our university says in the press release, what we say in the media interviews. We need to be cautious about what we claim.” The incentives push toward making big claims, but good science probably means slowing down and exercising more caution.