
by Rupert Sheldrake | Nov 10, 2023

Around 2015, scientists were shocked to find that most papers in high-prestige peer-reviewed scientific journals are not reproducible. In one study of papers in prestigious biomedical journals, 90% of the results could not be replicated; in experimental psychology, more than 60% failed to replicate.

This crisis partly arises from systematic biases that Rupert discusses in his chapter on ‘Illusions of Objectivity’ in The Science Delusion (2012, new edition 2020; in the US this book is called Science Set Free), including the selective observation and reporting of results, and perverse incentives for scientists and journals to publish striking positive findings.

The crisis continues to roll on, as shown, for example, by a December 2021 editorial in Nature about irreproducible results in cancer biology.

All this is relatively straightforward, but Rupert suggests that some experiments may also involve direct mind-over-matter effects. It has long been known that experimenters can influence their experimental results through their expectations, in so-called ‘experimenter expectancy effects’, which is why many clinical trials, psychological and parapsychological experiments are carried out under blind or double-blind conditions.

In most other fields of science, experimenter effects are ignored and blind methodologies are rarely employed. Rupert suggests that in addition to the usual sources of bias, experimenters may also influence experiments psychokinetically, through direct mind-over-matter effects. Scientists may be particularly prone to this source of error because most scientists believe psychokinesis is impossible, and hence take no precautions against it. They practise unprotected science.
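The blind and double-blind methodology mentioned above can be sketched in a few lines of code. This is a minimal illustration, not any specific trial protocol: an independent coordinator (not the experimenter) randomizes subjects into groups and hands out opaque codes, so that neither the experimenter scoring the outcomes nor the participants know who is in which group until the key is unblinded. The function name and subject labels are purely illustrative.

```python
import random

def make_blinded_allocation(n_subjects, seed=2023):
    """An independent coordinator assigns each subject to 'treatment' or
    'control' and issues opaque codes. The experimenter records outcomes
    knowing only the codes; the key is revealed only after data collection."""
    rng = random.Random(seed)
    arms = ["treatment", "control"] * (n_subjects // 2)  # balanced groups
    rng.shuffle(arms)
    key = {f"S{i:03d}": arm for i, arm in enumerate(arms)}  # held by coordinator
    codes = list(key)                                       # all the experimenter sees
    return codes, key

codes, key = make_blinded_allocation(20)
print(codes[:3])  # the experimenter's view: anonymous labels only
# After all outcomes are recorded, the key is unblinded to compare groups.
```

The essential design choice is that the allocation key lives with a third party, so the experimenter's hopes and expectations cannot leak into the scoring of individual subjects.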

Rupert proposes experiments on experiments to test for the effects of experimenters’ hopes and expectations.

The Replicability Crisis in Science

by Rupert Sheldrake | Nature | Sep 1, 2015

The world of science is in the midst of unprecedented soul-searching at present. The credibility of science rests on the widespread assumption that results are replicable, and that high standards are maintained by anonymous peer review. These pillars of belief are crumbling. In September 2015, the international scientific journal Nature published a cartoon showing the temple of “Robust Science” in a state of collapse. What is going on?

Drug companies sounded an alarm several years ago. They were concerned that an increasing proportion of clinical trials was failing, and that much of their research effort was being wasted. When they looked into the reasons for their lack of success, they realized that they were basing projects on scientific papers published in peer-reviewed journals, on the assumption that most of the results were reliable. But when they looked more closely, they found that most of these papers, even those in top-tier academic journals, were not reproducible. In 2011, researchers at the German drug company Bayer found in an extensive survey that more than 75% of the published findings could not be validated.

In 2012, scientists at the American drug company Amgen published the results of a study in which they selected 53 key papers deemed to be “landmark” studies and tried to reproduce them. Only 6 (11%) could be confirmed.

In 2012, the governments of the world’s richer countries spent $59 billion on biomedical research, one justification for which is that basic-science research provides the foundations for work by private drug companies. So this is not a trivial problem. Meanwhile, by 2013, in the realm of experimental psychology, as in other branches of science, there were alarming signs that much of the published research could not be replicated. A large-scale replication study by psychologists, published in 2015, sent further shock waves through the scientific world when it turned out that around two-thirds of the published studies in top psychology journals were not reproducible.

In the late nineteenth century, many scientists adopted a style of writing using the passive voice, “A test tube was taken….” instead of “I took a test tube…” to create as impersonal a style as possible, a world of emotion-free events unfolding spontaneously in front of a detached objective observer.

In reality, of course, scientists are people, and like other people have different temperaments and personalities from each other, are often competitive, and prefer their own hypothesis to be right rather than wrong. In most branches of science, scientists publish only a small percentage of their data, 10% or less, and obviously select the “best” results to publish, leaving inconvenient or inconclusive data unpublished. The problem is made worse by a systematic bias against replications within the sciences. Researchers who replicate other people’s work find it hard, if not impossible, to get their papers published, because replication is not deemed to be original, and most journals pride themselves on publishing original research.

Unfortunately, personal advancement in the world of science depends on incentives that encourage these questionable research practices. Professional scientists’ career prospects, promotions and grants depend on the number of papers they have published, the number of times they are cited and the prestige of the journals in which they are published. There are therefore powerful incentives for people to publish eye-catching papers with striking positive results. If other researchers cannot replicate the results, this may not be discovered for years, if it is discovered at all, and meanwhile the authors’ careers have advanced and the system perpetuates itself. In the world of business, the criteria for success depend on running a successful business, not on whether business plans are ranked highly by business academics or often cited in business journals. But status in the world of science depends on publications in scientific journals, rather than on practical effects in the real world.

Meanwhile, the peer-review system is falling into disrepute. The very fact that so many unreliable papers are published shows that the system is not working effectively, and a recent investigation by the American journal Science revealed some shocking results. A member of Science’s staff wrote a spoof paper, riddled with scientific and statistical errors, and sent 304 versions of it to a range of peer-reviewed journals. It was accepted for publication by more than half of them.

Obviously the present system of academic research encourages the publication of false positive results. At the same time, the huge financial incentives that underlie the multi-billion dollar drug industry encourage the suppression of negative results. Many drug companies simply do not publish the results of negative studies that show their drugs are ineffective, while they publish, of course, the results of positive studies that favour their drugs. Insofar as “evidence-based medicine” relies on published studies, it creates a very misleading impression of scientific objectivity, reflecting a strong bias based on the commercial self-interest of pharmaceutical corporations. Such practices are all too common, as Ben Goldacre shows in his book Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients (2012).

The psychologist Nicholas Humphrey has compared this “sub-prime science” crisis to the financial crisis of 2008. The implications of this crisis are far-reaching, because science is so important for our civilisation and economy. There is now an unprecedented mood of humility within the sciences. Whether there will be serious changes, or simply a reversion to business-as-usual, remains to be seen.


References

Johann Kepler, A Dream, or the Astronomy of the Moon (published posthumously in 1634 by his son). https://cdr.creighton.edu/items/f78cee20-64e9-4501-bd4b-c6899078ca01

Ben Goldacre, Bad Pharma, Fourth Estate, 2012. https://www.goodreads.com/book/show/15795155-bad-pharma

Robert Rosenthal and Ralph L. Rosnow, Artifacts in Behavioral Research, Oxford University Press, 2009. https://www.amazon.com/Artifacts-Behavioral-Research-Rosenthal-Rosnows-ebook-dp-B004JLO46M/dp/B004JLO46M/

Over half of psychology studies fail reproducibility test, Nature News, 2015. https://www.nature.com/articles/nature.2015.18248

Differential indoctrination of examiners and Rorschach responses. https://psycnet.apa.org/record/1965-12396-001

A longitudinal study of the effects of experimenter bias on the operant learning of laboratory rats. https://psycnet.apa.org/record/1965-01547-001

Could Experimenter Effects Occur in the Physical and Biological Sciences?, Skeptical Inquirer 22(3), 57-58, May/June 1998. https://www.sheldrake.org/research/experimenter-effects/could-experimenter-effects-occur-in-the-physical-and-biological-sciences

Quantum-Mechanical Random-Number Generator. https://pubs.aip.org/aip/jap/article-abstract/41/2/462/502759/Quantum-Mechanical-Random-Number-Generator?redirectedFrom=fulltext

