Scientific Research Bias
Bias is defined as "a mental leaning or inclination; partiality; prejudice; bent." Scientists are expected to be objective and open to learning the truth from their research. Yet physicists are also human. Each of us has our own likes and dislikes, preferences and preconceptions, and "hot buttons" that make us feel angry, uneasy, or uncomfortable.
Bias can damage research if researchers allow it to distort measurements and observations or their interpretation. When faculty are biased about individual students in their courses, they may grade some students more or less favorably than others, which is not fair to any of the students. In a research group, favored students and colleagues may get the best assignments and helpful mentoring. People often prefer associating with other people who are similar to themselves, their family members, or their friends.
The net result of these biases hurts physics, because people who are different and would bring valuable new perspectives to the field have traditionally been excluded or discouraged by those already in the field. It is not unusual for women, African Americans, Hispanics, and Native Americans to feel unwelcome in physics and other scientific fields, because of the low expectations their professors and colleagues have for them, and because of how they are treated by the people who should be their peers and colleagues.
While it is probably impossible to eliminate bias, each person can strive to be aware of his or her preferences and be alert to situations where bias can damage the science or one's colleagues. One can also become a careful observer of others and take action to counteract the unfair or inappropriate consequences of biases, especially those that work to exclude or diminish people from backgrounds different from the majority.
As the primary purpose of scientific publication is to share ideas and new results to foster further developments in the field, the increasing prevalence of fraudulent research and retractions is of concern to every scientist, since it taints the whole profession and undermines the basic premise of publishing.
While most scientists tend to dismiss the problem as the work of a small number of culprits (a shortcoming inherent to any human activity), there is a larger issue on the fringes of deception that is far more prevalent and of equal concern: the adoption of certain practices that blur the distinction between valid research and distortion, between "sloppy science", misrepresentation, and outright fraud.
Bias in research, where prejudice or selectivity introduces a deviation in outcome beyond chance, is a growing problem, probably amplified by:
the competitive aspects of the profession with difficulties in obtaining funding;
pressures for maintaining laboratories and staff;
the desire for career advancement (‘first to publish’ and ‘publish or perish’); and, more recently,
the monetization of science for personal gain.
Rather than being "disinterested contributors to a shared common pool of knowledge" (2), some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities, even to the exclusion of concerns regarding the accuracy, transparency and reproducibility of their science.
Bias tends to be obscured by the sheer volume of data reported. The number of publications in the life sciences has increased 44% in the last decade, and at least one leading biomedical journal now publishes in excess of 40,000 printed pages a year. Data is generally viewed as a "key basis of competition, productivity growth... [and]... innovation", irrespective of its conception, quality, reproducibility and usability. Much of it, in the opinion of Sydney Brenner, has become "low input, high throughput, no output science".
Indeed, while up to 80% of research publications apparently make little contribution to the advancement of science, "sit[ting] in a wasteland of silence, attracting no attention whatsoever", it is disconcerting that the remaining 20% may suffer from bias, as reflected in the increasing incidence of published studies that cannot be replicated or that require corrections or retractions, the latter a reflection of the power of the Internet.
Categories of Bias
Although some 235 forms of bias have been analyzed, clustered and mapped to biomedical research fields, for the purposes of this brief synopsis a cross-section of common examples is grouped into three categories:
1. Bias through ignorance can be as simple as not knowing which statistical test should be applied to a particular dataset, reflecting inadequate knowledge or scant supervision/mentoring. Similarly, the frequent occurrence of inappropriately large effect sizes when the number of animals used in a study is small, effects that subsequently disappear in follow-up studies that are more appropriately powered or when replication is attempted in a separate laboratory, may reflect ignorance of the importance of determining effect sizes and conducting power calculations.
The concern with disproportionately large effect sizes from small group sizes has been recognized by the National Institutes of Health (NIH), which now mandates power calculations validating the number of animals necessary to determine whether an effect occurs before funding a program. However, this necessitates preliminary, exploratory analyses replete with caveats, which might not get revisited, and is not a requirement with many other funding agencies. Too often, studies are published with the minimum number of animals necessary to plug into a Student's t-test software program (n=3), or with group sizes based on 'experience' or history. Replication of any finding as a standard component of a study is absolutely critical, but rare.
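To make the point concrete, the following is a minimal sketch of an a priori power calculation in Python, assuming the statsmodels package is available; the effect size, significance level and target power are illustrative assumptions, not values drawn from any of the studies cited here.

```python
# A minimal sketch of an a priori power calculation for a two-group animal study.
# The numbers below are illustrative assumptions only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Animals per group needed to detect a large effect (Cohen's d = 0.8)
# with 80% power in a two-sided, two-sample t-test at alpha = 0.05.
n_per_group = analysis.solve_power(effect_size=0.8, alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(f"Required group size: about {n_per_group:.0f} animals per group")

# By contrast, the power actually achieved by the n = 3 'plug into a t-test'
# design criticized above, for the same assumed effect size.
power_n3 = analysis.solve_power(effect_size=0.8, nobs1=3, alpha=0.05,
                                alternative='two-sided')
print(f"Power with n = 3 per group: {power_n3:.2f}  (far below the 0.8 target)")
```

Run before the study, a calculation along these lines documents the assumed effect size and makes explicit how little an n = 3 design can actually detect.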
2. Bias by design reflects critical flaws in experimental planning: designing an experiment to support rather than refute a hypothesis; lack of consideration of the null hypothesis; failure to incorporate appropriate control and reference standards; and reliance on single data points (endpoint, time point or concentration/dose). Of particular concern is the failure to perform experiments in a blinded, randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result when compared to studies that were appropriately blinded or randomized. While the impact of randomization might come as a surprise, since many animal studies are conducted in inbred strains with little heterogeneity, the opportunity to introduce bias into non-blinded experiments, even unintentionally, is very obvious. It is paramount that the investigator involved in data collection and analysis be unaware of the treatment schedule. How an outlier is defined and handled (e.g. dropped from the analysis), and which sub-groups are to be considered, must be established a priori and applied before the study is unblinded. Despite the importance of blinding in limiting bias, one analysis of 290 animal studies and another of 271 publications revealed that 86-89% were not blinded.
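As an illustration of what blinded randomization can look like in practice, here is a minimal Python sketch; the subject count, group names and file names are assumptions made purely for illustration and do not come from the studies cited above.

```python
# A minimal sketch of generating a randomized allocation with blinded codes.
# Group names, subject counts and file names are illustrative assumptions.
import csv
import random

rng = random.Random(20240101)  # record the seed so the allocation is auditable

animals = [f"animal_{i:02d}" for i in range(1, 21)]      # 20 subjects
treatments = ["vehicle"] * 10 + ["compound_X"] * 10      # balanced groups
rng.shuffle(treatments)                                   # random allocation

codes = rng.sample(range(100, 1000), k=len(animals))     # unique blinded labels

# The key linking code -> treatment is held by a third party until un-blinding,
# after the pre-specified analysis (outlier rules, sub-groups) is complete.
with open("allocation_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["animal", "code", "treatment"])
    writer.writerows(zip(animals, codes, treatments))

# The investigator collecting and analysing data sees only the coded list.
with open("blinded_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["animal", "code"])
    writer.writerows(zip(animals, codes))
```

The essential point is that the treatment assignment is generated by chance and kept separate from the people scoring the outcome until the pre-specified analysis plan has been carried out.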
Another important consideration in experimental design is the control of potentially confounding factors that can influence the experimental outcome indirectly. In the field of pharmacology, at a basic level this might include the importance of controlling blood pressure when evaluating compounds in preclinical studies of heart attack, stroke or thrombosis, or the recognition that most compounds lose specificity at higher doses; but consideration might also need to be given to other factors, such as the significance of chronobiology (where, for example, many heart attacks occur within the first three hours after waking).
3. Bias by misrepresentation. Researchers are an inherently optimistic group: the 'glass half full' is more likely brimming with champagne than tap water. Witness the heralding of the completion of the Human Genome Project, or the advent of gene therapy, stem cells, antisense, RNAi and any "-omics", all destined to have a major impact on eradicating disease in the near term. This tendency toward overstatement and oversimplification carries through to publications. The urge and rush to be first to publish a new "high-profile" finding can result in "sloppy science", but more significantly it can also reflect a strong bias. Early replications tend to be biased against the initial findings, the so-called Proteus phenomenon, although that bias is smaller than the bias in the initial study. It is not clear which is more disturbing: the level of bias and selective reporting found to occur in the initial studies; the finding that ~70% of follow-on studies contradict the original observation; or that it is so common and well-recognized a phenomenon that it even has a name.
A recent evaluation of 160 meta-analyses involving animal studies covering six neurological conditions, most of which were reported to show statistically significant benefits of an intervention, found that the "success rate" was too large to be true and that only 8 of the 160 could be supported, leading to the conclusion that reporting bias was a key factor.
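The logic behind such an "excess significance" evaluation can be sketched in a few lines of Python (using scipy); the study counts and the assumed average power below are purely illustrative and are not the figures from the cited analysis.

```python
# A minimal, simplified sketch of an excess-significance check: given a plausible
# average statistical power for individual studies, how many significant results
# would be expected, and is the observed count implausibly larger? The numbers
# here are illustrative assumptions, not data from the cited evaluation.
from scipy.stats import binomtest

n_studies = 160              # studies examined (illustrative)
observed_significant = 120   # studies reporting a significant benefit (illustrative)
assumed_power = 0.30         # assumed average power of the individual studies

expected = n_studies * assumed_power
test = binomtest(observed_significant, n_studies, assumed_power, alternative="greater")

print(f"Expected significant studies at {assumed_power:.0%} power: about {expected:.0f}")
print(f"Observed: {observed_significant}; one-sided p-value for excess: {test.pvalue:.1e}")
```

The published methodology builds the expected count from study-by-study power estimates rather than a single assumed value, but the underlying comparison of observed versus expected "positive" studies is the same.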
The retrospective selection of data for publication can be influenced by prevailing wisdom that promotes expectations of particular outcomes, or by the benefit of hindsight at the conclusion of a study, which allows an uncomplicated sequence of events to be traced and promulgated as the only conclusion possible.
While research misconduct in the form of overt fraud and plagiarism is a topic with high public visibility, it remains relatively rare in research publications, whereas data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training, or due to a lack of attention to quality controls, these practices foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings.
Scientific bias represents a proverbial "slippery slope", from the subjectivity of "sloppy science" and lack of replication, to the deliberate exclusion or non-reporting of data, to outright fabrication. Plagiarism; distortion of data or its interpretation; physical manipulation of data (e.g., western blots or NMR spectra altered to make the outcomes more visually appealing or obvious, often ascribed to the seductive simplicity of PowerPoint and the ease of manipulation with Photoshop); and blatant duplicity in the biopharma industry in the selective sharing of clinical trial outcomes, with inconclusive or negative trials often not reported: all contribute to the expanding concerns regarding scientific integrity and transparency.
This issue obviously increases in importance when the outcomes of investigator bias affect the expenditure of millions of dollars on research programs progressed on the basis of the data presented; when inappropriate New Chemical Entities are advanced into clinical trials, exposing patients to undue risk; and when unvalidated biomarkers are promoted to an anxious and misinformed public.
Correcting Bias
With the increase in bias, data manipulation and fraud, the role of the journal editor has become more challenging, both in terms of time and with regard to avoiding peer-review bias. Even with the barriers kept high, much of the process still depends on the integrity and ethics of the authors and their institutions. It is paramount that institutions, mentors and researchers promote high ethical standards, rigor in scientific thought, and ongoing evaluations of transparency and performance that meet exacting guidelines. Clinical trials with a full protocol defining the size of the study, randomization, dosing, blinding and endpoints have to be registered before the study can begin, and, at the conclusion of the study, every patient has to be accounted for and included in the analysis. A proposal has been made that non-clinical studies should adopt the same standards and, while not a requirement, such guidelines provide a useful rule of thumb to consider when designing any study. These topics, and their impact on the translation of research findings to the clinic, will be discussed in greater detail in an upcoming article in Biochemical Pharmacology.
The time has come for the ME community to face the facts about the dangers of the current NIH leadership's understanding and beliefs about ME. This is also why, as outlined in MEadvocacy's blog, it is crucial to use the international experts' criteria, the ICC (2011), to distinguish the neuroimmune disease ME from the "fatigue condition" described by other criteria.
In the preface to the ICC, its authors explain the need for accurate definitions: “There is a poignant need to untangle the web of confusion caused by mixing diverse and often overly inclusive patient populations in one heterogeneous, multi-rubric pot called ‘chronic fatigue syndrome’. We believe this is the foremost cause of diluted and inconsistent research findings, which hinders progress, fosters skepticism, and wastes limited research monies.”
The importance of selecting the correct patient group when defining criteria for the disease has been highlighted in a recent paper by Norwegian scientists titled 'What exactly is Myalgic Encephalomyelitis?'. The paper states: “The use of broad inclusion criteria has created a heterogeneous patient population, also within research. This has increased the risk of erroneous conclusions, misdiagnosis and incorrect treatment. For Myalgic Encephalomyelitis, the Canadian criteria and the International Consensus Criteria have in our view increased the accuracy of diagnosis due to their greater specificity and clearer delineation of the disorder from other forms of fatigue.” https://www.meadvocacy.org/analysis_of_cfsac_august_2015_recommendations_for_the_iom_criteria
Strategies for overcoming scientific challenges or barriers to progress in ME and CFS research
Since most physicians and the public at large do not believe the illness is crippling and "real", the illness must be taught in medical schools, and more ways to get the information out to the public are needed. Only then will researchers possibly be interested in spending their research skills and time on ME or CFS. I think if researchers knew the extent of the suffering of severely affected patients, they would be interested.
NIH needs to take the lead and undo 30+ years of negligence by issuing RFAs for ME and CFS. To be commensurate with disease burden, ME should be receiving $200 million in funding annually. Currently, we're getting a drop in the bucket. Please see the following study: https://www.oatext.com/Estimating-the-disease-burden-of-MECFS-in-the-United-States-and-its-relation-to-research-funding.php
Better definitions. Things need to be more clearly defined, and WHO codes need to be used more. Currently the CDC website is a mess. Loads of conditions lead to chronic fatigue; that is not the same as an enterovirus attacking the central nervous system, leading to brainstem encephalitis or encephalomyelitis.
The biggest barrier is the prevailing paradigm of it being a psychological illness; the field needs more compelling, evidence-based biomedical research than the (often discredited) psychological/behavioural studies. The research needs to be joined up, not carried out in isolation. Researchers are finding some similar things, and these should inform the future direction of research, rather than individual studies getting lost along the way. Increase pharmaceutical interest in the illness. Stop the stigma surrounding ME. Educate doctors that it is a neuro-immune disease with dysfunction in the immune, neurological, endocrine and metabolic systems. Educate doctors on the primary symptom of Post-Exertional Neuroimmune Exhaustion, stress early intervention with rest, and advise patients never to push beyond their energy capacity.
Most importantly, patients need to be more proactive when it comes to health care. This isn't a popularity contest, and there are a lot of opportunists involved in making money from a vulnerable group of very sick people. Instead of automatically following the leader, try researching on your own and learn to understand that Myalgic Encephalomyelitis is the disease that everyone knows about but chooses to ignore, neglect and bully, to prevent a shortfall in the donations they are asking for. It is not as prevalent as CFS or CFS/ME, SEID/IOM, CDC, Oxford, Fukuda, Reeves, etc., because ME-ICC is a distinct disease that has been included in the WHO ICD since 1969, and "encephalomyelitis" must be present to meet the requirements of this diagnosis.
References
(1) Stemwedel JD, “The continuum between outright fraud and "sloppy science": inside the frauds of Diederik Stapel (part 5)”, Scientific American June 26, 2013.
(2) Felin T, Hesterly WS, "The Knowledge-Based View, Nested Heterogeneity, And New Value Creation: Philosophical Considerations On The Locus Of Knowledge", Acad. Management Rev 2007, 32: 195–218.
(3) Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roxburgh C, Byers AH, “Big data: The next frontier for innovation, competition, and productivity”, McKinsey Global Institute, April 2011.
(4) Brenner S, “An interview with... Sydney Brenner”, Interview by Errol C. Friedberg, Nat Rev Mol Cell Biol 2008; 9:8-9.
(5) Mandavilli A, “Peer review: Trial by Twitter”, Nature 2011; 469: 286-7.
(6) Prinz F, Schlange T, Asadullah K, “Believe it or not: how much can we rely on published data on potential drug targets?”, Nature Rev Drug Discov 2011; 10: 712-3.
(7) Begley CG, Ellis LM, “Drug development: Raise standards for preclinical cancer research”, Nature 2012; 483: 531-3.
(8) Steen RG, Casadevall A, Fang FC, “Why has the number of scientific retractions increased?”, PLoS ONE 2013; 8: e68397.
(9) Chavalarias D, Ioannidis JPA, “Science mapping analysis characterizes 235 biases in biomedical research”, J Clin Epidemiol 2010; 63: 1205-15.
(10) Ioannidis JPA, “Why most published research findings are false”, PLoS Med 2005; 2: e124.
(11) Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, et al., “Power failure: why small sample size undermines the reliability of neuroscience”, Nat Rev Neurosci 2013; 14: 365-76.
(12) Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG, “Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments”, PLoS Med 2013; 10: e1001489.
(13) Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR, “Publication bias in reports of animal stroke studies leads to major overstatement of efficacy”, PLoS Biol 2010; 8: e1000344.
(14) Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al., “Survey of the quality of experimental design, statistical analysis and reporting of research using animals”, PLoS One 2009; 4: e7824.
(15) Wadman M, “NIH mulls rules for validating key results”, Nature 2013; 500: 14-6.
(16) Bebarta V, Luyten D, Heard K, “Emergency medicine animal research: does use of randomization and blinding affect the results?”, Acad Emerg Med 2003; 10: 684-7.
(17) Pfeiffer T, Bertram L, Ioannidis JPA, “Quantifying selective reporting and the Proteus Phenomenon for multiple datasets with similar bias”, PLoS One 2011; 6: e18362.
(18) Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al., “Evaluation of excess significance bias in animal studies of neurological diseases”, PLoS Biol 2013; 11: e1001609.
(19) Kakuk P, “The Legacy of the Hwang Case: Research Misconduct in Biosciences”, Sci Eng Ethics 2009; 15: 645-62.