[ecrea] Call for Papers: Questionable Research and Publication Practices in Communication Science (Communication Methods & Measures)
Fri Dec 06 01:34:45 GMT 2013
Call for Papers: Questionable Research and Publication Practices in
Communication Science
Special Issue of Communication Methods & Measures
Editors of Special Issue
Tilo Hartmann ((t.hartmann /at/ vu.nl))
Ivar Vermeulen ((i.e.vermeulen /at/ vu.nl))
Department of Communication Science
VU University Amsterdam
The Problem
Across the globe, scientific research communities are engaged in heated
debates about scientific conduct and questionable research and
publication practices (often referred to as the “sloppy science”
debate). This debate centers on the prevalence of questionable
scientific practices (see Table 1 for an overview) and on the extent to
which such practices hinder scientific progress. Although the debate
originated in other research fields, such as medicine (Ioannidis,
2005), criminology (Eisner, 2009), and psychology (see, e.g., the
November 2012 issue of Perspectives on Psychological Science), it is
clearly relevant to the practice of communication science. This
special issue of Communication Methods & Measures aims to spark a
discussion about “sloppy science” in communication research - a critical
reflection on our common research and reporting practices - with the
goal of improving our standards going forward.
Misconduct vs. questionable research practices: Most scholars would
hope, and probably agree, that blatant scientific misconduct such as
data fabrication or plagiarism is fairly rare. Although better ways of
detecting fraud may deserve our attention, we believe a much more
interesting and impactful debate concerns more common practices that
are “questionable” rather than outright illegitimate. A compelling
demonstration of the consequences of employing such borderline practices
is provided by Simmons et al. (2011), who show that undisclosed
flexibility in data collection and analysis allows researchers to
“present anything as significant” (p. 1359). Questionable research
practices (e.g., developing hypotheses after data analysis, Kerr, 1998;
increasing sample size until results become significant; not reporting
problematic cases, variables, or experimental conditions) may be
implicitly encouraged by publication practices that favor significant
findings and “good stories” (Kerr, 1998; Simmons et al., 2011; Levelt
Committee et al., 2012). Pressure to publish may also encourage
researchers to polish their manuscripts and to push aside ethical
concerns about research practices. As a result, many “false positive”
findings end up published (Nelson, Simmons, & Simonsohn, 2012) that are
unlikely to hold up when replication attempts are undertaken (Francis,
2012).
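To make this concrete, the following minimal simulation sketch (our
own illustration in Python, not taken from the cited studies, with
arbitrary illustrative parameters) shows how one such practice,
“peeking” at the data and adding cases until significance is reached,
inflates the false-positive rate well beyond the nominal 5% level even
when no true effect exists:

import numpy as np
from scipy import stats

# Illustrative only: both groups are drawn from the same distribution,
# so every "significant" result below is a false positive.
rng = np.random.default_rng(1)

def one_null_study(n_start=20, n_max=50, alpha=0.05):
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True   # stop early and report a "significant" effect
        if len(a) >= n_max:
            return False  # give up once a (post hoc) maximum n is reached
        a.append(rng.normal())  # peek: add one case per group, retest
        b.append(rng.normal())

n_sim = 5000
hits = sum(one_null_study() for _ in range(n_sim))
print(f"nominal alpha: 0.05; simulated rate: {hits / n_sim:.3f}")

Because chance gets a fresh opportunity to cross the significance
threshold at every look, the simulated rate lands well above 5%,
illustrating why pre-determined sample sizes matter.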
- - - - - - - - - - - - - -
Table 1: Examples of questionable research practices
Compiled from Eisner (2009), Simmons et al. (2011), and Levelt Committee
et al. (2012).
>> P-hacking: Practices that inflate the number of accepted
hypotheses or significant results reported in a paper
1 HARKing: Hypothesizing After Results are Known (or: presenting
exploratory findings as confirmatory findings)
2 “Peeking” (collecting extra cases until significance is reached; not
conforming to pre-determined sample size)
3 Instrumentally omitting or collapsing experimental conditions
4 Instrumentally omitting or collapsing dependent/mediating variables
5 Instrumental removal or inclusion of outliers (i.e., without employing
pre-determined exclusion criteria)
6 Instrumental removal of scale items (i.e., without employing
pre-determined criteria for scale construction)
7 Instrumental composition of outcome scores (e.g., difference or change
scores, dichotomizing scores, not conforming to a pre-determined
analysis plan)
8 Instrumental use of covariates (i.e., not conforming to a
pre-determined analysis plan)
>> Reproducibility problems: Practices that hamper the reproducibility
of prior results
1 Incomplete reporting on research procedure
2 Incomplete reporting on used measurement instruments
3 Incomplete reporting about statistical tests applied
4 Presenting underpowered studies
5 Keeping incomplete records of raw data, analyses, materials
>> Publication bias: Practices that lead to selective publication of
results
1 Cherry picking: submitting / accepting only studies that “worked”;
ignoring studies that “failed” (also: the “file drawer” problem)
2 Replication problem: low incentives to replicate prior studies and
publish them
- - - - - - - - - - - - - -
We believe that communication science is a field just as likely to
suffer from questionable practices as any other field of research.
Therefore, we seek to compile a special issue of Communication Methods &
Measures that contributes to a constructive debate on the prevalence,
determinants, and forms of questionable research practices within
communication science, as well as on successful interventions against
them. The goal is to increase awareness of questionable
research practices in our field, to illuminate the problem of false
positives and reproducibility in our field, and to contribute to the
ongoing discussion about how to further enhance our research and
reporting practices.
Thus, we issue this call for short empirical research reports that
examine questionable research and reporting practices in communication
science (for formatting requirements, please refer to the journal's
submission guidelines).
Papers that qualify for consideration include those that...
(1) Document the prevalence of and reasons for questionable research and
reporting practices
>> We encourage the submission of empirical papers that address the
prevalence of or reasons for questionable research and reporting
practices in communication science. For example, one could adapt the
survey of questionable research practices conducted by John et al.
(2012) to communication science.
>> In addition, we think it is also helpful to empirically examine
potentially problematic publication practices (e.g., a focus on “good
stories”, significant findings, accepted hypotheses, concise
methodological reporting, “new” stories rather than replications,
detrimental incentives for authors, reviewers, editors, etc.), as well
as the effectiveness of possible solutions (e.g., study
pre-registration, publication of data sets, supplementary material, etc.).
>> We also encourage content-analytical studies that examine to what
extent articles in leading communication journals report sufficient
methodological information (e.g., confidence intervals, data-handling
steps such as dropping cases or variables; see Simmons et al., 2011).
Also relevant in this context is the extent to which communication
scholars produce cumulative and comparable knowledge by using
standardized measurement instruments, rather than adapting existing
instruments or developing them “ad hoc”.
>> Furthermore, we are very much open to other ideas for empirically
addressing these issues.
or
(2) Reflect on Replication
>> Another set of short empirical reports may concern attempts to
replicate central research insights of communication science. Such
attempts could help the field reflect on specific reproducibility
problems and on possible solutions for improving reproducibility
(Koole & Lakens, 2012).
>> We encourage scholars to pick a central communication study, attempt
an exact replication, and not only report the results but also reflect
on the replication attempt itself (e.g., problems encountered).
Acceptance of replication studies will be based entirely on the quality
of the submitted research proposals, which are pre-registered through
the Open Science Framework before data collection; acceptance is thus
independent of study outcomes (see below).
>> Replication reports may be submitted as shorter papers of about 18
double-spaced pages in 12-point type, including references.
Submission Procedure
>> Early feedback about the general idea (until February 1st 2014): To
minimize overlap, we would like to avoid different scholars submitting
papers on the same topic. Therefore, we suggest that potential
contributors send a short and informal email (see email contacts above)
to both editors of the special issue, roughly sketching their
submission idea. The editors will indicate whether such a submission
would fit the special issue, and whether the contributor might be
willing to collaborate with others who propose a similar submission.
Replicating authors will receive further instructions on how to submit
and pre-register a full replication proposal.
>> Submission deadline for replication proposals: June 1st 2014
>> Submission deadline for other short empirical reports: September 1st
2014
>> Review of submitted replication proposals and empirical reports:
Following the standard procedures of Communication Methods & Measures,
all submissions will be evaluated in blinded peer review by two
reviewers. Editorial decisions should be announced within about 14
weeks after the submission deadlines.
References
>> Eisner, M. (2009). No effects in independent prevention trials: Can
we reject the cynical view? Journal of Experimental Criminology, 5, 163–183.
>> Francis, G. (2012). The psychology of replication and replication in
psychology. Perspectives on Psychological Science, 7(6), 585–594.
>> Ioannidis, J. P. A. (2005). Why most published research findings are
false. PLoS Medicine, 2(8), e124.
>> John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the
prevalence of questionable research practices with incentives for
truth-telling. Psychological Science, 23, 524–532.
>> Kerr, N. L. (1998). HARKing: Hypothesizing after the results are
known. Personality and Social Psychology Review, 2, 196–217. doi:
10.1207/s15327957pspr0203_4
>> Koole, S. L., & Lakens, D. (2012). Rewarding replications: A sure and
simple way to improve psychological science. Perspectives on
Psychological Science, 7(6), 608–614.
>> Levelt Committee, Noort Committee, & Drenth Committee (2012). Flawed
science: The fraudulent research practices of social psychologist
Diederik Stapel. Retrieved from
http://www.tilburguniversity.edu/nl/nieuws-en-agenda/finalreportLevelt.pdf
>> Nelson, L.D., Simmons, J.P., & Simonsohn, U. (2012). Let's publish
fewer papers. Psychological Inquiry: An International Journal for the
Advancement of Psychological Theory, 23(3), 291–293.
>> Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011).
False-positive psychology: Undisclosed flexibility in data collection
and analysis allows presenting anything as significant. Psychological
Science, 22, 1359–1366.