Top 10 BITSS Papers Today


0.0 Mikeys
#1. Dissertation R.C.M. van Aert
Robbie C. M. van Aert
More and more scientific research gets published nowadays, asking for statistical methods that enable researchers to get an overview of the literature in a particular research field. For that purpose, meta-analysis methods were developed that can be used for statistically combining the effect sizes from independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in primary studies’ true effect sizes. Accurate estimation of both the meta-analytic effect size as well as the between-study variance in true effect size is crucial since the results of meta-analyses are often used for policy making. Publication bias distorts the results of a meta-analysis since it refers to situations where publication of a primary study depends on its results. We developed new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although the methods...
more | pdf
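The core idea behind p-uniform is that, conditional on statistical significance, p-values are uniformly distributed at the true effect size, so the estimator searches for the effect size that makes the conditional p-values of the significant studies look uniform. Below is a minimal Python sketch of that idea, not the authors' implementation (the reference implementation is their puniform R package); it assumes normally distributed effect size estimates, two-sided tests at alpha = .05, and studies all significant in the positive direction:

```python
import numpy as np
from scipy import stats, optimize

def p_uniform_estimate(y, se, alpha=0.05):
    """Sketch of a p-uniform-style estimate from significant studies only.

    y, se : effect sizes and standard errors of studies significant at
    `alpha` (two-sided, positive direction). At the true effect, the
    p-values conditional on significance are U(0,1), so sum(-ln q_i)
    has expectation k; we solve for the mu matching that expectation.
    """
    y, se = np.asarray(y, float), np.asarray(se, float)
    z_crit = stats.norm.ppf(1 - alpha / 2)  # two-sided significance cutoff
    k = len(y)

    def gap(mu):
        # q_i = P(Y_i > y_i | Y_i significant) under true effect mu
        q = stats.norm.sf((y - mu) / se) / stats.norm.sf(z_crit - mu / se)
        return np.sum(-np.log(np.clip(q, 1e-300, 1.0))) - k

    # gap() is decreasing in mu; this generous bracket assumes the
    # estimate lies within a few standard errors of the observed effects
    return optimize.brentq(gap, y.min() - 5 * se.max(), y.max() + 5 * se.max())
```

Feeding the estimator only the significant studies is the point: the selection step that creates publication bias is built into the conditional distribution, so the resulting estimate is corrected for it.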
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 1
Total Words: 115012
Unique Words: 12731

0.0 Mikeys
#2. Mapping the Universe of Registered Reports
Tom E Hardwicke, John Ioannidis
Registered reports present a substantial departure from traditional publishing models with the goal of enhancing the transparency and credibility of the scientific literature. We map the evolving universe of registered reports to assess their growth, implementation, and shortcomings at journals across scientific disciplines.
more | pdf
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 6489
Unique Words: 2439

0.0 Mikeys
#3. Examining the Reproducibility of Meta-Analyses in Psychology: A Preliminary Report
Daniel Lakens, Elizabeth Page-Gould, Marcel A.L.M. van Assen, Bobbie Spellman, Felix Schönbrodt, Fred Hasselman, Katherine S. Corker, James A. Grange, Amanda Sharples, Corinne Cavender
Meta-analyses are an important tool to evaluate the literature. It is essential that meta-analyses can easily be reproduced to allow researchers to evaluate the impact of subjective choices on meta-analytic effect sizes, but also to update meta-analyses as new data comes in, or as novel statistical techniques (for example to correct for publication bias) are developed. Research in medicine has revealed meta-analyses often cannot be reproduced. In this project, we examined the reproducibility of meta-analyses in psychology by reproducing twenty published meta-analyses. Reproducing published meta-analyses was surprisingly difficult. 96% of meta-analyses published in 2013-2014 did not adhere to reporting guidelines. A third of these meta-analyses did not contain a table specifying all individual effect sizes. Five of the 20 randomly selected meta-analyses we attempted to reproduce could not be reproduced at all due to lack of access to raw data, no details about the effect sizes extracted from each study, or a lack of information...
more | pdf
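The mechanics underline why a missing effect size table is fatal: pooling is a one-line computation once the per-study effect sizes and standard errors are available, and impossible otherwise. A fixed-effect sketch in Python (illustrative only, not the project's code; real reproductions also involve the random-effects and moderator analyses the original authors ran):

```python
import numpy as np

def fixed_effect_meta(y, se):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE.

    y, se : per-study effect sizes and standard errors, i.e. exactly the
    table a reproducible meta-analysis must report for every study.
    """
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / se**2
    pooled = np.sum(w * y) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Without the per-study table (y, se), even this one-liner cannot be rerun,
# which is the reproducibility failure the project documents.
print(fixed_effect_meta([0.42, 0.31, 0.55], [0.10, 0.12, 0.15]))
```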
Figures
None.
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 10
Total Words: 6763
Unique Words: 1972

0.0 Mikeys
#4. Will knowledge about more efficient study designs increase the willingness to pre-register?
Daniel Lakens
Pre-registration is a straightforward way to make science more transparent and to control Type I error rates. Pre-registration is often presented as beneficial for science in general, but rarely as a practice that leads to immediate individual benefits for researchers. One benefit of pre-registered studies is that they allow for non-conventional research designs that are more efficient than conventional designs. For example, by performing one-tailed tests and sequential analyses researchers can perform well-powered studies much more efficiently. Here, I examine whether such non-conventional but more efficient designs are considered appropriate by editors under the pre-condition that the analysis plans are pre-registered, and if so, whether researchers are more willing to pre-register their analysis plan to take advantage of the efficiency benefits of non-conventional designs. Study 1 shows the large majority of editors judged one-tailed tests and sequential analyses to be appropriate in psychology, but only when such analyses...
more | pdf
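The efficiency claim is easy to make concrete. With the standard normal-approximation sample size formula for a two-group comparison, a one-tailed test at the same alpha uses a smaller critical value and therefore needs fewer participants. A sketch (example numbers are mine, not from the paper):

```python
import numpy as np
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80, tails=2):
    """Normal-approximation sample size per group for a two-sample
    comparison of means with standardized effect size d."""
    z_a = stats.norm.ppf(1 - alpha / tails)
    z_b = stats.norm.ppf(power)
    return int(np.ceil(2 * ((z_a + z_b) / d) ** 2))

# Same effect (d = 0.5) and power: the one-tailed design needs roughly
# 20% fewer participants per group.
print(n_per_group(0.5, tails=2))  # 63 per group, two-tailed
print(n_per_group(0.5, tails=1))  # 50 per group, one-tailed
```

Sequential analyses cut expected sample size further still, since studies of true effects can often stop at an interim look.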
Figures
None.
Tweets
lakens: @Liikennepsykol1 @dcjohnsonphd See https://t.co/ErzcIE3myi
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 1
Total Words: 6583
Unique Words: 1919

0.0 Mikeys
#5. Transparency, Reproducibility, and the Credibility of Economics Research
Garret Christensen, Edward Miguel, Jennifer Sturdy
There is growing interest in enhancing research transparency and reproducibility in economics and other scientific fields. We survey existing work on these topics within economics, and discuss the evidence suggesting that publication bias, inability to replicate, and specification searching remain widespread in the discipline. We next discuss recent progress in this area, including through improved research design, study registration and pre-analysis plans, disclosure standards, and open sharing of data and materials, drawing on experiences in both economics and other social sciences. We discuss areas where consensus is emerging on new practices, as well as approaches that remain controversial, and speculate about the most effective ways to make economics research more credible in the future.
more | pdf
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 34499
Unique Words: 8760

0.0 Mikeys
#6. Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition
Tom E Hardwicke, Maya Mathur, Kyle MacDonald, Gustav Nilsonne, George Christopher Banks, Mallory C. Kidwell, Alicia Hofelich Mohr, Elizabeth Clayton, Erica J. Yoon, Michael Henry Tessler
Access to data is a critical feature of an efficient, progressive, and ultimately self-correcting scientific ecosystem. But the extent to which in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data (“analytic reproducibility”). To investigate, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in data available statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62% post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles all target values were reproduced, but 11 of these required author assistance. For 13 articles...
more | pdf
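The interrupted time-series design can be sketched as a segmented binomial regression: model the monthly share of articles carrying a data statement, with a level shift (and optionally a slope change) at the policy date. Everything below (variable names, monthly counts, the 24-month window) is invented for illustration; only the roughly 25%/78% anchors come from the abstract:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(24)                       # 12 months pre, 12 post (made up)
post = (months >= 12).astype(float)          # policy indicator
since = np.where(post == 1, months - 11, 0)  # slope change after policy

papers = np.full(24, 20)                     # articles per month (made up)
with_stmt = np.where(post == 0,
                     rng.binomial(papers, 0.25),   # ~25% pre-policy
                     rng.binomial(papers, 0.78))   # ~78% post-policy

X = sm.add_constant(np.column_stack([months, post, since]))
fit = sm.GLM(np.column_stack([with_stmt, papers - with_stmt]),
             X, family=sm.families.Binomial()).fit()
print(fit.params)  # the `post` coefficient captures the level shift
```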
Figures
Tweets
sharoz: @dingding_peng This one looked at reproducibility in Cognition - https://t.co/RrAHHd4tN5
sharoz: @davidmanheim @JoshuaDWallach @METRICStanford Other fields are doing better: 1. https://t.co/DOqKOadFdV 2. https://t.co/RrAHHd4tN5 https://t.co/BXqakgLb6m
pegleraj: Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. https://t.co/2EMeiPl7Ze. Very interesting meta-research, thanks @Tom_Hardwicke and team!
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 10
Total Words: 20760
Unique Words: 5776

0.0 Mikeys
#7. How Often Should We Believe Positive Results? Assessing the Credibility of Research Findings in Development Economics
Aidan Coville, Eva Vivalt
Under-powered studies combined with low prior beliefs about intervention effects increase the chances that a positive result is overstated. We collect prior beliefs about intervention impacts from 125 experts to estimate the false positive and false negative report probabilities (FPRP and FNRP) as well as Type S (sign) and Type M (magnitude) errors for studies in development economics. We find that the large majority of studies in our sample are generally credible. We discuss how more systematic collection and use of prior expectations could help improve the literature.
more | pdf
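The FPRP itself is one application of Bayes' rule: among significant results, false positives arrive at rate alpha times (1 minus prior), and true positives at rate power times prior. A sketch using the standard Wacholder et al. formula (the example numbers are mine, not estimates from the paper):

```python
def fprp(prior, power, alpha=0.05):
    """False positive report probability: P(no true effect | significant).

    Bayes' rule over significant results: false positives occur at rate
    alpha * (1 - prior), true positives at rate power * prior.
    """
    return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

# Low prior belief plus low power makes positive results untrustworthy:
print(round(fprp(prior=0.1, power=0.3), 2))   # 0.60
print(round(fprp(prior=0.5, power=0.8), 3))   # 0.059
```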
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 8404
Unique Words: 2335

0.0 Mikeys
#8. Publication Bias in Meta-Analyses from Psychology and Medicine: A Meta-Meta-Analysis
Robbie C. M. van Aert, Jelte Wicherts, Marcel A.L.M. van Assen
Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias is widespread, how strongly it affects different scientific literatures is currently less well-known. We examine evidence of publication bias in a large-scale data set of meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and the Cochrane Database of Systematic Reviews (representing meta-analyses from medical research). Psychology is compared to medicine, because medicine has a longer history than psychology with respect to preregistration of studies as an effort to counter publication bias. The severity of publication bias and its inflating effects on effect size estimation were systematically studied by applying state-of-the-art publication bias tests and the p-uniform method for estimating effect size corrected for publication...
more | pdf
Figures
Tweets
DenOlmo: @chrisdc77 @BrianNosek However, two recent preprints based on meta-analyses in Psychological Bulletin and Cochrane Library find that there is not much evidence for publication bias. Very surprising, but that's what the data say. https://t.co/yi00SpPBAQ https://t.co/eDawOBymSf
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 16728
Unique Words: 4395

0.0 Mikeys
#9. Identification of and correction for publication bias
Maximilian Kasy, Isaiah Andrews
Some empirical results are more likely to be published than others. Selective publication leads to biased estimates and distorted inference. We propose two approaches for identifying the conditional probability of publication as a function of a study’s results, the first based on systematic replication studies and the second on meta-studies. For known conditional publication probabilities, we propose bias-corrected estimators and confidence sets. We apply our methods to recent replication studies in experimental economics and psychology, and to a meta-study on the effect of the minimum wage. When replication and meta-study data are available, we find similar results from both.
more | pdf
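A caricature of the meta-study approach: assume a single true effect theta and a step selection rule in which significant results are always published and insignificant ones only with probability p; maximum likelihood on the published estimates then recovers both p and a bias-corrected theta. A compressed sketch under those simplifying assumptions (the paper's estimators are considerably more general):

```python
import numpy as np
from scipy import stats, optimize

def neg_loglik(params, x, sigma, z_crit=1.96):
    """Negative log-likelihood of published estimates x (std errors sigma)
    under a step selection rule: publish w.p. 1 if |x/sigma| >= z_crit,
    else w.p. p (parametrized on the logit scale for stability)."""
    theta, p = params[0], 1.0 / (1.0 + np.exp(-params[1]))
    pub = np.where(np.abs(x / sigma) >= z_crit, 1.0, p)
    dens = pub * stats.norm.pdf((x - theta) / sigma) / sigma
    # P(publish | theta) = P(significant) + p * P(insignificant)
    prob_insig = (stats.norm.cdf(z_crit - theta / sigma)
                  - stats.norm.cdf(-z_crit - theta / sigma))
    return -np.sum(np.log(dens / ((1 - prob_insig) + p * prob_insig)))

# given arrays x, sigma of published estimates:
# res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], args=(x, sigma))
# theta_hat = res.x[0]  # bias-corrected effect estimate
```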
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 26826
Unique Words: 4604

0.0 Mikeys
#10. Predicting the Replicability of Social Science Lab Experiments
Adam Altmejd, Anna Dreber Almenberg, Eskil Forsell, Teck-Hua Ho, Juergen Huber, Taisuke Imai, Magnus Johannesson, Michael Kirchler, Gideon Nave, Colin Camerer
We measure how accurately replication of experimental results can be predicted by a black-box statistical model. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train a predictive model and study which variables drive predictable replication. The model predicts binary replication with a cross-validated accuracy rate of 70% (AUC of 0.79) and relative effect size with a Spearman ρ of 0.38. The accuracy level is similar to the market-aggregated beliefs of peer scientists (Camerer et al., 2016; Dreber et al., 2015). The predictive power is validated in a pre-registered out-of-sample test of the outcome of Camerer et al. (2018b), where 71% (AUC of 0.73) of replications are predicted correctly and effect size correlations amount to ρ = 0.25. Basic features such as the sample and effect sizes in original papers, and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The...
more | pdf
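The pipeline the abstract describes (tabular features of the original study fed to a black-box classifier and scored by cross-validated AUC) is straightforward to mock up. A toy scikit-learn sketch on synthetic data: the feature list follows the abstract, but the data-generating rule and all numbers are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # pretend original studies (synthetic stand-in data)
X = np.column_stack([
    rng.integers(20, 500, n),     # original sample size
    rng.normal(0.4, 0.2, n),      # original effect size
    rng.integers(0, 2, n),        # 1 = two-variable interaction effect
])
# toy rule: larger samples and main effects replicate more often
p = 1 / (1 + np.exp(-(0.004 * X[:, 0] + 1.5 * X[:, 1] - 1.0 * X[:, 2] - 1.0)))
y = rng.binomial(1, p)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")  # the paper reports 0.79 on real data
```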
Figures
None.
Tweets
rei_akaishi: Predicting the Replicability of Social Science Lab Experiments https://t.co/u7JIGjHix9
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 10
Total Words: 7771
Unique Words: 2553

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 72,893 papers.
