Top 10 BITSS Papers Today


0.0 Mikeys
#1. Dissertation R.C.M. van Aert
Robbie C. M. van Aert
More and more scientific research is published nowadays, calling for statistical methods that enable researchers to get an overview of the literature in a particular research field. For that purpose, meta-analysis methods were developed that statistically combine the effect sizes from independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in primary studies’ true effect sizes. Accurate estimation of both the meta-analytic effect size and the between-study variance in true effect size is crucial, since the results of meta-analyses are often used for policy making. Publication bias, which refers to situations where the publication of a primary study depends on its results, distorts the results of a meta-analysis. We developed new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although the methods...
more | pdf
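As a rough illustration of the publication-bias problem the dissertation addresses (this is not code from the dissertation itself), the following R sketch simulates primary studies, "publishes" only the significant positive ones, and compares the naive estimate based on the published studies with the true effect; all numbers are illustrative assumptions.

# Illustrative simulation (assumption, not from the dissertation): how selective
# publication of significant results inflates a naive meta-analytic estimate.
set.seed(1)
true_d <- 0.2          # true standardized effect size
k      <- 1000         # number of primary studies
n      <- 40           # per-group sample size in each study
se_d   <- sqrt(2 / n)  # approximate standard error of Cohen's d
d_hat  <- rnorm(k, mean = true_d, sd = se_d)
p_vals <- 2 * pnorm(abs(d_hat / se_d), lower.tail = FALSE)
published <- p_vals < .05 & d_hat > 0   # only significant, positive studies get "published"

mean(d_hat)              # estimate based on all studies: close to 0.2
mean(d_hat[published])   # estimate based on "published" studies only: clearly inflated

Methods such as p-uniform and p-uniform* aim to correct for exactly this kind of inflation by modeling the selection of significant results.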
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 1
Total Words: 115012
Unique Words: 12731

0.0 Mikeys
#2. The evolving preprint landscape: Introductory report for the Knowledge Exchange working group on preprints.
Jonathan Tennant, Serge Bauin, Sarah James, Juliane Kant
Introductory report for the Knowledge Exchange working group on preprints.
more | pdf
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 4
Total Words: 4634
Unique Words: 1865

0.0 Mikeys
#3. Mapping the Universe of Registered Reports
Tom E Hardwicke, John Ioannidis
Registered reports present a substantial departure from traditional publishing models with the goal of enhancing the transparency and credibility of the scientific literature. We map the evolving universe of registered reports to assess their growth, implementation, and shortcomings at journals across scientific disciplines.
more | pdf
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 6489
Unique Words: 2439

0.0 Mikeys
#4. Likelihood Ratios: A Tutorial
Scott
Many in psychology view their choice of statistical approaches as being between frequentist and Bayesian. However, a third approach, the use of likelihood ratios, provides several distinct advantages over both the frequentist and Bayesian options. A quick explanation of the basic logic of likelihood ratios is provided, followed by a comparison of the likelihood-based approach to frequentist and Bayesian methods. The bulk of the paper provides examples with formulas for computing likelihood ratios based on t-scores, ANOVA outputs, chi-square statistics, and binomial data, as well as examples of using likelihood ratios to test for models that make a priori predictions of effect sizes. Finally, advice on interpretation is offered.
more | pdf
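The abstract mentions computing likelihood ratios from t-scores. As a hedged sketch of the general idea (not necessarily the exact formulas used in the tutorial), base R's noncentral t density can compare the likelihood of an observed t-statistic under the null against an alternative that predicts a specific effect size; the effect size of 0.5 and the sample size below are illustrative assumptions.

# Sketch (assumed example, not the tutorial's own): likelihood ratio for an observed
# independent-samples t, comparing H1 (a priori effect size d = 0.5) against H0 (d = 0).
n      <- 30                  # per-group sample size (assumption)
t_obs  <- 2.4                 # observed t-statistic (assumption)
df     <- 2 * n - 2
ncp_h1 <- 0.5 * sqrt(n / 2)   # noncentrality parameter implied by d = 0.5

lr <- dt(t_obs, df = df, ncp = ncp_h1) / dt(t_obs, df = df, ncp = 0)
lr   # how many times more likely the data are under H1 than under H0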
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 1
Total Words: 6155
Unique Words: 1768

0.0 Mikeys
#5. Declaring and Diagnosing Research Designs
Graeme Blair, Jasper Cooper, Alexander Coppock, Macartan Humphreys
Researchers need to select high-quality research designs and communicate those designs clearly to readers. Both tasks are difficult. We provide a framework for formally “declaring” the analytically relevant features of a research design in a demonstrably complete manner, with applications to qualitative, quantitative, and mixed methods research. The approach to design declaration we describe requires defining a model of the world (M), an inquiry (I), a data strategy (D), and an answer strategy (A). Declaration of these features in code provides sufficient information for researchers and readers to use Monte Carlo techniques to diagnose properties such as power, bias, correct identification of causal conditions, and other “diagnosands.” Ex ante declarations can be used to improve designs and facilitate preregistration, analysis, and reconciliation of intended and actual analyses. Ex post declarations are useful for describing, sharing, reanalyzing, and critiquing existing designs. We provide open-source software, DeclareDesign, to...
more | pdf
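The M, I, D, A framework is implemented in the authors' DeclareDesign R package. The sketch below declares and diagnoses a minimal two-arm experiment with functions from current releases of DeclareDesign and randomizr; the particular model, effect size, and sample size are illustrative assumptions, and function names may differ in older package versions.

# Minimal MIDA declaration (illustrative numbers), assuming current DeclareDesign syntax.
library(DeclareDesign)

design <-
  declare_model(N = 100, U = rnorm(N),
                potential_outcomes(Y ~ 0.25 * Z + U)) +   # M: model of the world
  declare_inquiry(ATE = mean(Y_Z_1 - Y_Z_0)) +            # I: inquiry
  declare_assignment(Z = complete_ra(N, prob = 0.5)) +    # D: data strategy (assignment)
  declare_measurement(Y = reveal_outcomes(Y ~ Z)) +       # D: measurement
  declare_estimator(Y ~ Z, inquiry = "ATE")               # A: answer strategy

diagnose_design(design, sims = 500)   # Monte Carlo diagnosands: power, bias, coverage, ...

Because the whole design is declared in code, the same object can be preregistered, modified ex ante to improve diagnosands, or shared ex post for reanalysis and critique.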
Figures
None.
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 4
Total Words: 32290
Unique Words: 6343

0.0 Mikeys
#6. Why We Need Open Policy Analysis
Fernando Hoces de la Guardia, Sean Grant, Edward Miguel
The evidence-based policy movement promotes the use of empirical evidence to inform policy decision-making. While this movement has gained traction over the last two decades, several concerns about the credibility of empirical research have been identified in scientific disciplines that use research methods and practices that are commonplace in policy analysis. As a solution, we argue that policy analysis should adopt the transparent, open, and reproducible research practices espoused in related disciplines. We first discuss the importance of evidence-based policy in an era of increasing disagreement about facts, analysis, and expertise. We then review recent credibility crises of empirical research (difficulties reproducing results), their causes (questionable research practices such as publication biases and p-hacking), and their relevance to the credibility of evidence-based policy (trust in policy analysis). The remainder of the paper makes the case for "open" policy analysis and how to achieve it. We include examples of...
more | pdf
Figures
Tweets
GrantSeanP: @fhoces from @UCBITSS presenting on Open Policy Analysis at #APPAM2018 Read our pre-print with @tedmiguel here! https://t.co/UpJ1mR2C9q https://t.co/5nv9Yv3FDk
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 7685
Unique Words: 2616

0.0 Mikeys
#7. Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition
Tom E Hardwicke, Maya Mathur, Kyle MacDonald, Gustav Nilsonne, George Christopher Banks, Mallory C. Kidwell, Alicia Hofelich Mohr, Elizabeth Clayton, Erica J. Yoon, Michael Henry Tessler
Access to data is a critical feature of an efficient, progressive, and ultimately self-correcting scientific ecosystem. But the extent to which in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data (“analytic reproducibility”). To investigate, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in data available statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62%, post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles all target values were reproduced, but 11 of these required author assistance. For 13 articles...
more | pdf
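Two concrete pieces of the workflow described in this abstract lend themselves to short sketches: the interrupted time-series comparison of pre- and post-policy proportions, and the 10% reproducibility margin. The R code below is a generic segmented logistic regression and margin check using toy data and hypothetical column names; it is not the authors' analysis script.

# Generic sketch (toy data, hypothetical names; not the authors' analysis script).
set.seed(123)

# Toy monthly data: share of articles with a data-availability statement before/after a policy.
articles <- data.frame(
  time              = 1:24,
  post_policy       = rep(c(0, 1), each = 12),
  time_since_policy = c(rep(0, 12), 1:12),
  n_with_data       = c(rbinom(12, 20, 0.25), rbinom(12, 20, 0.75)),
  n_total           = 20
)

# Segmented logistic regression around the policy change (a standard ITS specification).
its_fit <- glm(cbind(n_with_data, n_total - n_with_data) ~
                 time + post_policy + time_since_policy,
               family = binomial, data = articles)
summary(its_fit)

# Reproducibility check: is a recomputed value within a 10% margin of the reported one?
within_margin <- function(reported, reproduced, margin = 0.10) {
  abs(reproduced - reported) <= margin * abs(reported)
}
within_margin(reported = 0.42, reproduced = 0.45)   # TRUE: within 10%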
Figures
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 10
Total Words: 20760
Unique Words: 5776

0.0 Mikeys
#8. A Bayesian Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power
Quentin Frederik Gronau, Sara van Erp, Daniel W. Heck, Joseph Cesario, Kai Jonas, Eric-Jan Wagenmakers
Carney, Cuddy, and Yap (2010) found that, compared to participants who adopted constrictive body postures, participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review we obtained an empirically informed...
more | pdf
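The linked metaBMA repository implements the model-averaging approach described in the abstract. A minimal sketch of how such an analysis might be run is shown below, assuming the package's documented meta_bma() interface and using made-up effect sizes and standard errors rather than the six preregistered studies analyzed in the paper.

# Sketch with made-up data (assumption: meta_bma() interface as documented in metaBMA).
library(metaBMA)

dat <- data.frame(
  study = paste("Study", 1:6),
  yi    = c(0.21, 0.10, 0.35, 0.02, 0.18, 0.26),      # effect sizes (made up)
  sei   = c(0.12, 0.15, 0.14, 0.11, 0.13, 0.12)       # standard errors (made up)
)

# Model-averaged meta-analysis over fixed-effect and random-effects models,
# yielding posterior model probabilities and Bayes factors for the pooled effect.
fit <- meta_bma(y = yi, SE = sei, labels = study, data = dat)
fit

Informed priors such as the empirically motivated prior on felt power described in the abstract would be passed in place of the package defaults.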
Figures
Tweets
Github

Bayesian Model Averaging for Random and Fixed Effects Meta-Analysis

Repository: metaBMA
User: danheck
Language: R
Stargazers: 2
Subscribers: 3
Forks: 2
Open Issues: 0
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 6
Total Words: 6572
Unique Words: 1582

0.0 Mikeys
#9. bridgesampling: An R Package for Estimating Normalizing Constants
Quentin Frederik Gronau, Henrik Singmann, Eric-Jan Wagenmakers
Statistical procedures such as Bayes factor model selection and Bayesian model averaging require the computation of normalizing constants (e.g., marginal likelihoods). These normalizing constants are notoriously difficult to obtain, as they usually involve high-dimensional integrals that cannot be solved analytically. Here we introduce an R package that uses bridge sampling (Meng and Wong 1996; Meng and Schilling 2002) to estimate normalizing constants in a generic and easy-to-use fashion. For models implemented in Stan, the estimation procedure is automatic. We illustrate the functionality of the package with three examples.
more | pdf
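For models fit with Stan, the workflow the abstract refers to is essentially: fit each competing model with rstan, pass the fitted objects to bridge_sampler(), and compare the resulting marginal likelihood estimates. The sketch below shows that shape for two hypothetical Stan model files and a hypothetical data list; note that the Stan programs need to retain normalizing constants (e.g., via target += statements) for the marginal likelihoods to be meaningful.

# Sketch of the typical workflow (hypothetical model files and data; not one of the paper's three examples).
library(rstan)
library(bridgesampling)

# Fit two competing models with enough posterior draws for stable bridge sampling estimates.
fit_h0 <- stan(file = "model_h0.stan", data = stan_data, iter = 10000, warmup = 1000)
fit_h1 <- stan(file = "model_h1.stan", data = stan_data, iter = 10000, warmup = 1000)

# Estimate log marginal likelihoods via bridge sampling.
bridge_h0 <- bridge_sampler(fit_h0)
bridge_h1 <- bridge_sampler(fit_h1)

bf(bridge_h1, bridge_h0)         # Bayes factor comparing the two models
error_measures(bridge_h1)        # approximate estimation error of the marginal likelihood
post_prob(bridge_h1, bridge_h0)  # posterior model probabilities (equal prior odds)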
Figures
None.
Tweets
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 3
Total Words: 11662
Unique Words: 2994

0.0 Mikeys
#10. Preprint_vanAert_vanAssen18
Robbie C. M. van Aert, Marcel A.L.M. van Assen
Publication bias is a major threat to the validity of a meta-analysis, resulting in overestimated effect sizes. P-uniform is a meta-analysis method that corrects estimates for publication bias, but the method overestimates the average effect size in the presence of heterogeneity in true effect sizes (i.e., between-study variance). We propose an extension and improvement of the p-uniform method called p-uniform*. P-uniform* improves upon p-uniform in three important ways, as it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size in case of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of the between-study variance in true effect sizes. We compared the statistical properties of p-uniform* with the selection model approach of Hedges (1992) as implemented in the R package “weightr” and the random-effects model in both an analytical and a Monte-Carlo simulation study. Results revealed that the statistical properties of p-uniform* and the selection...
more | pdf
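The p-uniform* method described here is distributed in the authors' puniform R package, and the selection-model comparison uses the weightr package. Below is a hedged sketch of how the two might be applied to the same made-up set of effect sizes; the function and argument names are assumptions based on the packages' documented interfaces, not code from the preprint.

# Sketch with made-up effect sizes and sampling variances (assumed argument names).
library(puniform)   # contains puni_star(), the p-uniform* implementation
library(weightr)    # Vevea-Hedges selection model via weightfunct()

yi <- c(0.31, 0.12, 0.45, 0.08, 0.27, 0.22)          # observed effect sizes (made up)
vi <- c(0.020, 0.025, 0.030, 0.015, 0.022, 0.018)    # sampling variances (made up)

# p-uniform*: estimates the average effect size and the between-study variance,
# correcting for publication bias that favors significant positive results.
puni_star(yi = yi, vi = vi, side = "right")

# Selection model approach of Hedges (1992) for comparison.
weightfunct(effect = yi, v = vi)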
Figures
Tweets
ysTtsEVFDLcgPIY: RT @JelteWicherts: New version of P-uniform that corrects for publication bias in random effects meta-analyses. https://t.co/xVnVXUZnNf
Github
None.
Youtube
None.
Other stats
Sample Sizes: None.
Authors: 2
Total Words: 17610
Unique Words: 3178

About

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day.

Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).

To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else).

To see beautiful figures extracted from papers, follow us on Instagram.

Tracking 56,474 papers.
