Research bias, also called experimenter bias, is a process in which the scientists performing the research influence the results in order to portray a certain outcome. Some bias in research arises from experimental error and from failure to take all of the possible variables into account. Other bias arises when researchers select subjects that are more likely to generate the desired results, a reversal of the normal processes governing science. Bias is the single factor that makes qualitative research far more dependent upon experience and judgment than quantitative research.
Acceptance and Acknowledgment of Bias. The main point to remember with bias is that, in many disciplines, it is unavoidable. When working with social research subjects, for example, it is very easy to become attached to a certain viewpoint, jeopardizing impartiality. Any experimental design process therefore involves understanding the inherent biases and minimizing their effects.
In quantitative research, the researcher tries to eliminate bias completely, whereas in qualitative research the goal is to understand and acknowledge that bias will occur.
Design bias is introduced when the researcher fails to take into account the inherent biases present in most types of experiment. Some bias is inevitable, and researchers must show that they understand this and have tried their best to lessen its impact, or have taken it into account in the statistics and analysis. Another type of design bias occurs after the research is finished and the results are analyzed: the original reservations of the researchers are not included in the publicity, something all too common in these days of press releases and politically motivated research. For example, publicity about research into the health benefits of Acai berries may omit limitations of the sample group that the researchers themselves acknowledged; the group tested may have been all female, or all over a certain age.
Sampling bias occurs when the process of sampling actually introduces an inherent bias into the study. There are two types of sampling bias, based around those samples that you omit, and those that you include:
Omission Bias This research bias occurs when certain groups are omitted from the sample. Ethnic minorities, for example, may be excluded or, conversely, may be the only group studied. A study into heart disease that used only white males, generally volunteers, cannot be extrapolated to the entire population, which includes women and other ethnic groups. Omission bias is often unavoidable, so the researchers have to incorporate and account for it in the experimental design.
Inclusive Bias Inclusive bias occurs when samples are selected for convenience, typically because volunteers are the only group available, and volunteers tend to fit a narrow demographic range. This is not a problem as long as the researchers are aware that they cannot extrapolate the results to the entire population. Enlisting students outside a bar for a psychological study will not give a fully representative group.
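The effect of sampling bias can be made concrete with a small simulation. The numbers below are invented purely for illustration: a population made of two groups with genuinely different means, where an "omission-biased" sample drawn from only one group systematically misestimates the population mean, while a sample drawn from the whole population does not.

```python
import random

random.seed(42)

# Hypothetical, illustrative numbers only: two subgroups of a population
# (e.g. two demographic groups) with genuinely different mean scores.
included_group = [random.gauss(120, 10) for _ in range(5000)]
omitted_group = [random.gauss(135, 10) for _ in range(5000)]
population = included_group + omitted_group

def mean(xs):
    return sum(xs) / len(xs)

# Omission bias: sampling only from one group...
biased_sample = random.sample(included_group, 200)
# ...versus sampling from the whole population.
fair_sample = random.sample(population, 200)

print(f"population mean:    {mean(population):.1f}")
print(f"biased sample mean: {mean(biased_sample):.1f}")
print(f"fair sample mean:   {mean(fair_sample):.1f}")
```

No amount of statistical analysis on the biased sample recovers the true population mean, which is why omitted groups must be accounted for in the design itself.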
Procedural bias occurs when an unfair amount of pressure is applied to the subjects, forcing them to complete their responses quickly. For example, employees asked to fill out a questionnaire during their break period are likely to rush rather than read the questions properly. Using students who are obliged to participate for course credit is another source of this bias: they are more than likely to fill in the survey quickly, leaving plenty of time to visit the bar.
Measurement bias arises from errors in data collection and in the process of measuring. In a quantitative experiment, a faulty scale would cause an instrument bias and invalidate the entire experiment. In qualitative research, the scope for bias is wider and much more subtle, and the researcher must be constantly aware of the problems. Subjects are often extremely reluctant to give socially unacceptable answers for fear of being judged; for example, a subject may strive to avoid appearing homophobic or racist in an interview. This can skew the results and is one reason why researchers often combine interviews with an anonymous questionnaire to minimize measurement bias. In participant studies especially, performing the research will itself have an effect upon the behavior of the sample groups. This is unavoidable, and the researcher must attempt to assess the potential effect.
Instrument bias is one of the most common sources of measurement bias in quantitative experiments. This is the reason why instruments should be properly calibrated, and multiple samples taken to eliminate any obviously flawed or aberrant results.
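The distinction between random error and instrument bias can be sketched numerically. In this illustrative simulation (all values hypothetical), a miscalibrated instrument adds a constant offset to every reading: averaging many readings shrinks the random noise but leaves the systematic offset untouched, whereas calibrating against a known standard lets the offset be estimated and subtracted.

```python
import random

random.seed(0)

TRUE_MASS = 50.0   # hypothetical true value being measured
OFFSET = 2.5       # systematic error of a miscalibrated instrument

def reading(true_value):
    """One measurement: systematic offset plus small random noise."""
    return true_value + OFFSET + random.gauss(0, 0.5)

readings = [reading(TRUE_MASS) for _ in range(100)]
average = sum(readings) / len(readings)
# Averaging repeated readings reduces random error, but the result is
# still biased by roughly the full offset of 2.5.
print(f"average of 100 readings: {average:.2f} (true value {TRUE_MASS})")

# Calibration: measure a known standard, estimate the offset, subtract it.
STANDARD = 20.0
estimated_offset = sum(reading(STANDARD) for _ in range(100)) / 100 - STANDARD
corrected = average - estimated_offset
print(f"after calibration:       {corrected:.2f}")
```

This is why taking multiple samples guards against aberrant individual readings, but only proper calibration removes the bias itself.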
Interviewer Bias This is one of the most difficult research biases to avoid in studies that rely upon interviews. The interviewer may subconsciously give subtle clues, through body language or tone of voice, that influence the subject into giving answers skewed towards the interviewer's own opinions, prejudices and values.
Any experimental design must take this into account, or use some form of anonymous process to eliminate the worst effects.
Response Bias Conversely, response bias occurs when the subject, consciously or subconsciously, gives the responses that they think the interviewer wants to hear. The subject may also believe that they understand the experiment and are aware of the expected findings, and so adapt their responses to suit. Again, this type of bias must be factored into the experiment, or the amount of information given to the subject must be restricted, to prevent them from understanding the full extent of the research.
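A toy simulation can show why researchers pair interviews with anonymous questionnaires. All numbers here are invented for illustration: honest answers on a 1-5 attitude scale average 3.0, but in a face-to-face interview each subject shades their answer partway towards what they believe the interviewer wants to hear.

```python
import random

random.seed(7)

TRUE_ATTITUDE = 3.0      # hypothetical average honest answer, 1-5 scale
PERCEIVED_DESIRED = 4.5  # what subjects think the interviewer wants to hear
SHADING = 0.4            # fraction of the gap a subject closes face to face

def honest():
    """An honest answer, clamped to the 1-5 scale."""
    return min(5.0, max(1.0, random.gauss(TRUE_ATTITUDE, 0.8)))

def face_to_face():
    """Response bias: the answer drifts towards the perceived desired one."""
    h = honest()
    return h + SHADING * (PERCEIVED_DESIRED - h)

anonymous = [honest() for _ in range(500)]
interviewed = [face_to_face() for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"anonymous questionnaire mean: {mean(anonymous):.2f}")
print(f"face-to-face interview mean:  {mean(interviewed):.2f}")
```

Comparing the two modes of administration exposes the shift; relying on the interview alone would silently overstate the attitude being measured.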
Reporting Bias Reporting bias occurs when errors are made in the way that results are disseminated in the literature. With the growth of the internet, this type of bias is becoming a greater source of concern. Its main source is that positive results tend to be reported far more often than research in which the null hypothesis is upheld. Increasingly, research sponsors bury unfavorable findings and publicize only favorable ones. Unfortunately, for many types of study, such as meta-analyses, the negative results are just as important to the statistics.
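Why buried negative results matter to meta-analysis can be sketched with a simulation (parameters invented for illustration). Here the true effect is exactly zero, yet if only studies with a clearly "positive" estimate get published, the average published effect looks substantial.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0        # the null hypothesis is actually true
N_STUDIES = 1000         # many small, identical studies
N_SUBJECTS = 20

def run_study():
    """One small study: the estimated effect is just the sample mean."""
    data = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    return sum(data) / len(data)

estimates = [run_study() for _ in range(N_STUDIES)]

# Reporting bias: only studies whose estimate clears a crude "positive
# result" cut-off ever reach the literature.
published = [e for e in estimates if e > 0.4]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean effect, all studies:       {mean(estimates):+.3f}")
print(f"mean effect, published studies: {mean(published):+.3f}")
print(f"studies published: {len(published)} of {N_STUDIES}")
```

A meta-analysis fed only the published estimates would conclude there is a real effect, which is why registries of unpublished and null results are so valuable.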
Example of CFS Bias #1
Cross-Cultural Study of Information Processing Biases in Chronic Fatigue Syndrome: Comparison of Dutch and UK Chronic Fatigue Patients. AM Hughes et al., Nov 29, 2016. https://www.ncbi.nlm.nih.gov/pubmed
PURPOSE: This study aims to replicate a UK study with a Dutch sample, to explore whether attention and interpretation biases and general attentional control deficits in chronic fatigue syndrome (CFS) are similar across populations and cultures.
METHOD: Thirty-eight Dutch CFS participants were compared to 52 CFS and 51 healthy participants recruited from the UK. Participants completed self-report measures of symptoms, functioning, and mood, as well as three experimental tasks: (i) a visual-probe task measuring attentional bias to illness-related (somatic symptoms and disability) versus neutral words, (ii) an interpretive bias task measuring positive versus somatic interpretations of ambiguous information, and (iii) the Attention Network Test, measuring general attentional control.
RESULTS: Compared to controls, Dutch and UK participants with CFS showed a significant attentional bias for illness-related words and were significantly more likely to interpret ambiguous information in a somatic way. These effects were not moderated by attentional control. There were no significant differences between the Dutch and UK CFS groups on attentional bias, interpretation bias, or attentional control scores.
CONCLUSION: This study replicated the main findings of the UK study, with a Dutch CFS population, indicating that across these two cultures, people with CFS demonstrate biases in how somatic information is attended to and interpreted. These illness-specific biases appear to be unrelated to general attentional control deficits.
KEYWORDS: Attentional bias; Attentional control; Chronic fatigue syndrome; Cross-cultural study; Interpretation bias
Why the Bias in ME/CFS
How have selection bias and disease misclassification undermined the validity of myalgic encephalomyelitis/chronic fatigue syndrome studies? Nacul L, Lacerda EM, Kingdon CC, Curran H, Bowman EW. London School of Hygiene and Tropical Medicine, UK. March 2017. https://www.ncbi.nlm.nih.gov/pubmed/28810428
Abstract: Myalgic Encephalomyelitis/Chronic Fatigue Syndrome has been a controversial diagnosis, resulting in tensions between patients and the professionals providing them with care. A major constraint limiting progress has been the lack of a 'gold standard' for diagnosis; instead, a number of imperfect clinical and research criteria are used, each defining different, though overlapping, groups of people with Myalgic Encephalomyelitis or Chronic Fatigue Syndrome. We review basic epidemiological concepts to illustrate how the use of more specific and restrictive case definitions could improve research validity and drive progress in the field by reducing the selection bias caused by diagnostic misclassification.
KEYWORDS: chronic fatigue syndrome; diagnosis; epidemiology; misclassification; myalgic encephalomyelitis/chronic fatigue syndrome; selection bias
Example of CFS Bias #2
Attentional and interpretive bias towards illness-related information in chronic fatigue syndrome: A systematic review. Alicia Maria Hughes, Colette R Hirsch, +1 author, Rona Moss-Morris. Published in British Journal of Health Psychology, 2016. DOI: 10.1111/bjhp.12207. https://www.semanticscholar.org/paper/Attentional-and-interpretive-bias-towards-in-A-Hughes-Hirsch/44514224ed0986699270a0f89868092b7364b6c6
Treatment and management of chronic fatigue syndrome/myalgic ... J Castro-Marrero et al., Feb 1, 2017. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5301046
At present, no firm conclusions can be drawn because the few RCTs undertaken to date have been small-scale, with a high risk of bias, and have used different case definitions. Further, RCTs are now urgently needed, with rigorous experimental designs and appropriate data analysis, focusing particularly on the comparison of outcome measures according to clinical presentation, patient characteristics, case criteria, and degree of disability (i.e. severely ill or bedridden ME cases).
The claim that CFS and ME are distinct clinical entities is controversial. In this comprehensive review, we apply the term CFS/ME pragmatically. The recently proposed 2015 NIH/Institute of Medicine (IOM) diagnostic criteria for SEID redefine CFS/ME for clinical applications. The IOM recommended that the name of the illness be changed from CFS/ME to SEID (Clayton, 2015). All CFS/ME (SEID) case definitions are assessed in terms of sensitivity (i.e. the ability to identify CFS/ME patients correctly) and specificity (i.e. the ability to exclude patients who do not have CFS/ME). Subgroup analysis suggests that, depending on the case definition applied, the CFS/ME (SEID) population may represent a variety of conditions rather than a single disease entity. If patient samples include participants with different conditions, it is impossible to determine the core domains or symptoms, or to apply proper treatment strategies. It is therefore essential to identify patient subsets correctly in order to implement personalized treatments; failure to do so will also have detrimental consequences for research, in the interpretation of epidemiological and aetiological factors and of treatments (Bested and Marshall, 2015).
Replacing Myalgic Encephalomyelitis and Chronic Fatigue Syndrome with Systemic Exercise Intolerance Disease Is Not the Way forward. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4808825/
The Pre-Assumption that Myalgic Encephalomyelitis (ME) and Chronic Fatigue Syndrome (CFS) Denote “Similar Conditions” Is Invalid. According to the IOM, a diagnosis of SEID should be made if three core symptoms (a substantial reduction in activity accompanied by fatigue, post-exertional malaise, and unrefreshing sleep) are present, together with at least one of two additional symptoms (cognitive impairment or orthostatic intolerance). In addition, the risk-of-bias tool used showed low reliability between individual reviewers.
Replacing ME and CFS with a third clinical entity does not resolve the diagnostic and scientific impasse. SEID is based upon the invalid premise that ME and CFS are similar conditions, upon a thorough analysis of scientific literature that mainly relates to CFS, and upon consensus. As a logical consequence, SEID does not capture the essence of ME. Moreover, the diagnostic criteria of SEID are no more restrictive than the CFS criteria and also apply to people with other medical and psychological conditions. Self-report and subjective measures, as proposed by the IOM, are not adequate for diagnosis and for assessment of the clinical status of patients in research.
To resolve the diagnostic impasse and to improve the quality of research,
(a) the scientific community should acknowledge that much of the confusion originates from merging two clinical entities, ME and CFS, into a “hybrid diagnosis” (ME/CFS);
(b) symptoms should be assessed by objective measures, not by self-report only;
(c) pattern recognition methods should be used to establish the optional symptoms of ME, and to reveal the “symptom clusters”/disorders covered by the “umbrella diagnosis” CFS, taking into account confounding variables such as onset and duration of illness; and
(d) diagnostic labels should preferably reflect the clinical picture.