Null result: In science, a null result is a result without the expected content: that is, the proposed result is absent. It is an experimental outcome that does not show an otherwise expected effect. This does not imply a result of zero or nothing, simply a result that does not support the hypothesis. In statistical hypothesis testing, a null result occurs when an experimental result is not significantly different from what would be expected under the null hypothesis: the probability of obtaining a result at least as extreme, assuming the null hypothesis is true (the p-value), does not fall below the significance level, i.e., the threshold set before the test for rejecting the null hypothesis.
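To make that decision rule concrete, the following sketch (hypothetical data and a conventional 0.05 threshold; it assumes Python with SciPy, neither of which is specified by the source) compares two groups whose difference is too small to reach significance, producing a null result:

```python
# Hypothetical example of a null result: the observed difference is not
# statistically significant, so the null hypothesis is not rejected.
from scipy import stats

control   = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]   # made-up measurements
treatment = [5.2, 5.0, 5.1, 4.9, 5.3, 5.0]   # made-up measurements

alpha = 0.05                                  # significance level fixed before the test
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value >= alpha:
    # Null result: the data do not support the hypothesized effect,
    # which is not the same as demonstrating that the effect is zero.
    print(f"p = {p_value:.3f} >= {alpha}: null result, fail to reject H0")
else:
    print(f"p = {p_value:.3f} < {alpha}: reject H0")
```

Failing to reject the null hypothesis here says nothing about whether a larger or better-powered study would detect an effect.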
Science policy: Science policy is concerned with the allocation of resources for the conduct of science towards the goal of best serving the public interest. Topics include the funding of science, the careers of scientists, and the translation of scientific discoveries into technological innovation to promote commercial product development, competitiveness, economic growth, and economic development. Science policy focuses on knowledge production and the role of knowledge networks, collaborations, and the complex distributions of expertise, equipment, and know-how.
Evidence-based practice: Evidence-based practice (EBP) is the idea that occupational practices ought to be based on scientific evidence. While seemingly obviously desirable, the proposal has been controversial, with some arguing that results may not generalize to individuals as well as traditional practices do. Evidence-based practices have been gaining ground since the formal introduction of evidence-based medicine in 1992 and have spread to the allied health professions, education, management, law, public policy, architecture, and other fields.
Scholarly peer review: Scholarly peer review or academic peer review (also known as refereeing) is the process of having a draft version of a researcher's methods and findings reviewed (usually anonymously) by experts (or "peers") in the same field. Peer review is widely used for helping the academic publisher (that is, the editor-in-chief, the editorial board or the program committee) decide whether the work should be accepted, accepted with revisions, or rejected for official publication in an academic journal, a monograph, or the proceedings of an academic conference.
Scientific community: The scientific community is a diverse network of interacting scientists. It includes many "sub-communities" working on particular scientific fields, and within particular institutions; interdisciplinary and cross-institutional activities are also significant. Objectivity is expected to be achieved by the scientific method. Peer review, through discussion and debate within journals and conferences, assists in this objectivity by maintaining the quality of research methodology and interpretation of results.
Center for Open Science: The Center for Open Science is a non-profit technology organization based in Charlottesville, Virginia, with a mission to "increase the openness, integrity, and reproducibility of scientific research." Brian Nosek and Jeffrey Spies founded the organization in January 2013, with funding mainly from the Laura and John Arnold Foundation, among others. The organization began with work on the reproducibility of psychology research, through the large-scale initiative Reproducibility Project: Psychology.
Invalid science: Invalid science consists of scientific claims based on experiments that cannot be reproduced or that are contradicted by experiments that can be reproduced. Recent analyses indicate that the proportion of retracted claims in the scientific literature is steadily increasing. The number of retractions has grown tenfold over the past decade, but retractions still make up only approximately 0.2% of the 1.4 million papers published annually in scholarly journals (roughly 2,800 papers a year). The U.S. Office of Research Integrity (ORI) investigates scientific misconduct.
Replication crisis: The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
Reproducibility Project: The Reproducibility Project: Psychology was a crowdsourced collaboration of 270 contributing authors to repeat 100 published experimental and correlational psychological studies. This project was led by the Center for Open Science and its co-founder, Brian Nosek, who started the project in November 2011. The results of this collaboration were published in August 2015. Reproducibility is the ability to produce the same findings, using the same methodologies as the original work, but on a different dataset (for instance, collected from a different set of participants).
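To illustrate that definition of reproducibility, the sketch below (illustrative Python with made-up numbers, not drawn from the project itself) applies one fixed analysis, the "same methodology", first to an original dataset and then to newly collected data:

```python
# Illustrative sketch: reproducibility as running the identical analysis
# pipeline on a different dataset (e.g., a new set of participants).
import numpy as np
from scipy import stats

def analysis(group_a, group_b):
    """The fixed, pre-specified analysis: mean difference and two-sample t-test."""
    effect = np.mean(group_a) - np.mean(group_b)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    return effect, p_value

# Original study's data (hypothetical values).
original_effect, original_p = analysis([3.1, 3.4, 2.9, 3.6], [2.5, 2.7, 2.4, 2.8])

# Replication attempt: the same analysis, applied to data from new participants.
replication_effect, replication_p = analysis([3.0, 3.2, 3.5, 2.8], [2.9, 3.1, 2.6, 3.0])

print(f"original:    effect={original_effect:.2f}, p={original_p:.3f}")
print(f"replication: effect={replication_effect:.2f}, p={replication_p:.3f}")
```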
Misuse of p-values: Misuse of p-values is common in scientific research and scientific education. p-values are often used or interpreted incorrectly; the American Statistical Association states that p-values can indicate how incompatible the data are with a specified statistical model. Under a Neyman–Pearson approach to statistical inference, comparing the p-value to a pre-specified significance level yields one of two results: either the null hypothesis is rejected (which does not prove that the null hypothesis is false), or the null hypothesis cannot be rejected at that significance level (which does not prove that the null hypothesis is true).
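One reason rejection does not prove the null hypothesis false can be shown with a small simulation (an assumed illustration in Python with NumPy and SciPy, not part of the source): when the null hypothesis is actually true, roughly a fraction alpha of tests will still reject it purely by chance.

```python
# Assumed illustration: under a true null hypothesis, about alpha of all
# tests still produce p <= alpha, so a single small p-value is not proof
# that the null hypothesis is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_tests = 10_000
rejections = 0

for _ in range(n_tests):
    # Both samples come from the same distribution, so the null hypothesis holds.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(a, b)
    if p_value <= alpha:
        rejections += 1          # a false positive

print(f"rejected a true null in {rejections / n_tests:.1%} of tests "
      f"(expected about {alpha:.0%})")
```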