
Impish Iggies

HOW TO BE BIASED (IN A GOOD WAY)

Tatum Clinton Miller

Recently, I came across a new criticism: I’m biased and I have an agenda.

Yes, indeed, I am biased! But this comment left out a very important fact: so is everyone else!

Though few people consciously realize it, the way our brains are wired predisposes us to be biased. This is actually a pretty useful feature, as it allows us to respond instinctively to potentially dangerous stimuli and navigate the world with ease using only our relatively minimal, fleeting perceptions. However, it can cause problems too. Sometimes the way our brains process our perceptions is inaccurate and deceiving. There’s a great National Geographic show about this phenomenon called Brain Games, which is available to view online via Netflix.

Even research scientists, who many might consider to be the most objective, unbiased people of all, are still human and still fall into these traps of expectation plus perception creating a distorted view of reality. This is why James Randi, magician and skeptic extraordinaire, was able to easily debunk many cases of alleged paranormal phenomena which fooled trained scientists. With his background as a performing magician, his set of preconceived notions about how to analyze potential illusions were quite different from those of a scientist, which is what gave him an edge when uncovering the tricks responsible for these “paranormal” phenomena. (For those who are interested in learning more about Randi and the phenomena he debunked, there’s a fascinating documentary about him called An Honest Liar which is also available on Netflix.)

So, since we are all naturally biased on a basic level, how can we distinguish unfairly biased writing and research as objectively as possible? Here are a few tips from yours truly:

1) Evaluate each argument, article, or study based on its own merits. A common trend is for people to flat-out refuse to read an article based solely on their preconceptions about the author. It’s a good thing to be especially skeptical when evaluating content written by someone who is known to have written flawed work before. However, completely refusing to even consider evaluating said author’s individual works is unfairly biased on the reader’s part. There may be a lot you’ll disagree with, but there may also be some important nuggets of truth buried in the rubbish that you would miss entirely by refusing to read a particular work based on your preconceptions about the author.

For example, I disagree with quite a lot of what practitioners of “alternative” veterinary medicine propose, but a few of their nontraditional views are scientifically founded and on point. Many alternative vets are strongly opposed to pediatric/early neutering due to its harmful effects, and push for vaccinating triennially for some vaccines like rabies and DAPP/DHPP rather than the common practice of annual vaccination. In both these cases, there is scientific research that supports these conclusions. However, I do tend to disagree with them on most other points due to a lack of supporting research for their claims. If a time comes when research studies begin to provide such supporting evidence, then I would gladly reconsider my opinions on those topics.

My favorite example of this is Noam Chomsky: a ground-breaking linguist whose work is essential background reading for every linguistics student, but an absurd political scientist whose wild political views are best not taken too seriously!

2) When evaluating an article, look closely at the sources. If an article has no sources whatsoever, it’s either an opinion piece or an anecdote (or both). These can be useful in initiating further research and discussion, but should never be taken as hard science or cited as such.

If an article does cite sources, pay attention to four things in particular:

a) Credibility. If an article cites anecdotal opinion pieces as if they were scientific proof, don’t take them as scientific proof! As for actual research studies, the next section covers how to evaluate those in particular.

b) Quantity. The more valid sources an article cites, the better. If an article hinges on a single source alone, you need to take a long, hard look at that single source to determine whether it’s a quality source which actually supports all the arguments the article citing it claims it does.

c) Diversity. If an article cites a lot of works from the same source, it’s vital to determine the quality of not only the individual works cited but also the credibility of that main source. If that source is something like a blog or website which focuses on a specific worldview or philosophy (e.g. alternative medicine, animal rights, anti-breeder agenda), there’s a good chance that that source is unfairly biased in one particular direction and presents only one side of each story. If that source is a peer-reviewed scientific journal, it’s more likely that it simply happens to be a reputable publication with a lot of studies on specific topics which may not be covered by other scientific journals. In any case, the larger and more diverse the supporting evidence presented, the more accurate and balanced an article is likely to be.

d) Relevance. If an article makes bold claims, the evidence cited should actually support those specific claims. Sometimes an article cites a source that provides evidence for part of a claim, but that source draws completely different conclusions from that evidence than the article citing it does. In that case, the author needs to present additional evidence justifying their rejection of the original source’s conclusions. If an article cites evidence only for smaller claims that are possibly irrelevant non sequiturs with respect to the writer’s larger conclusion(s), while neglecting to cite evidence for those larger claims, it’s reasonable to doubt the author’s unsupported conclusion(s).

3) Don’t treat research studies as infallible sources of scientific fact. Studies can be flawed or even fabricated, so while they should be considered more credible than anecdotal opinion pieces, they too should be evaluated with skepticism. This is especially true if the author of a piece hinges their entire argument on a single study (or a handful of studies by the same author(s)), or if the author claims that aspects of a study are flawed, misleading, or fraudulent. Here are a few ways to critically evaluate research studies:

a) Pay attention to the date of publication. If a study is decades old, there’s a fair chance that some of its conclusions are outdated and have been tweaked or even totally contradicted by newer studies. This is not always the case, as some ground-breaking studies remain important and relevant many years after publication. Other times, the topic of the research is so specialized that there may not be any newer relevant research to compare with it. However, it’s still a good idea to do a little digging to see whether more recent research also supports the study’s claims, and to take note of any new revelations on the topic since the older study’s publication.

b) Evaluate the research methods. Sometimes, a study might be too technical for the average layperson to fully critically evaluate, but a number of basic best practices in research methods will be largely the same. Does the study use a sufficiently large sample size? Are participants selected randomly or through another method? Is there a control group? How well does the study control for confounding factors (e.g. implementing a double-blind research design when appropriate, or adjusting statistical analysis to control for confounding factors)? Are some potentially critical confounding factors not controlled for at all? What practical limitations do the authors disclose? If you’re unsure what’s considered the standard procedure for any of these elements in a particular field of research, browse through other studies in the field to get a better idea of this.

c) Look at what other studies cite a particular study, and how they do so. If a study is cited by a large number of subsequent studies, that can be an indicator that the study is largely considered credible by other researchers. On the other hand, you may find that other authors have written rebuttals or performed counter-studies which contradict that initial study’s claims. In the latter case, you need to read these opposing perspectives to uncover potential flaws in the first study’s research. Whether or not the concerns raised are valid, you should at least be aware of them.

d) Evaluate the credibility of the journal (or other source) which published the study. Peer-reviewed scientific journals tend to be the most credible sources for well-conducted, critically evaluated research studies, and some of these journals are more highly esteemed and set higher standards for their published material than others. Other publications are not peer-reviewed, which sets the bar lower for the quality of the research that’s published. Research which appears on university websites but is not published in a scientific journal is also not necessarily subjected to peer review, and has a correspondingly lower level of quality control. Studies published on websites or in newsletters hosted by their authors are subjected to virtually no quality control at all and may have numerous issues preventing them from being formally published in a peer-reviewed journal.

e) Take into account potential conflicts of interest. Sometimes authors declare conflicts of interest within their studies, but not always. Research that is primarily for-profit (e.g. used to develop a health test that can be marketed to dog breeders) and that lacks quality control, such as a third-party organization evaluating the validity and accuracy of the research (again, no such quality control exists for DNA tests marketed to dog breeders), should be treated with a correspondingly higher degree of skepticism.

f) Attempt to determine whether the data and results actually support the authors’ conclusion. When there’s a glaring flaw in the research design of a study, determining that the authors’ conclusion is not truly supported by their data is fairly straightforward. However, in some instances, though there may be no obvious problem with a study’s data, the authors may still make an unwarranted logical leap in their concluding statements which is factually unsupported by their results. This is a much more common issue in studies which are not peer-reviewed. For a specific example of this, see my write-up on the problems of the UC Davis Italian Greyhound Genetic Diversity Test.

4) If an author points out the flaws of a particular source or study but also uses some of the data from that source as supporting evidence, evaluate whether their purported flaws are accurate and if their utilization of data from that source is justified rather than contradictory. Building off the last point in the previous section, it is possible for a study to have a poorly supported conclusion but accurately collected data and properly analyzed results. In this case, it’s possible for an author to utilize the valid parts of that study while simultaneously condemning some of its other aspects.

This particular issue has been brought up with my usage of the data from the UC Davis genetic diversity research done to formulate their respective breed genetic diversity tests. In this case, the problem doesn’t lie with the method of data collection or analysis, but rather with the conclusions of the researchers. This is further complicated by the fact that this is for-profit research which is not peer-reviewed, has no outside means of quality control, and is ultimately conducted in order to appeal to a specific market of purebred dog breeders. Notably, the summaries of the research behind these tests do not cite any outside sources, and of course, the write-ups are written in a way that plays up the purported benefits of the test, whether or not these benefits have any existing supporting evidence. In the case of the IG diversity test discussed in detail in the link I previously provided, the researchers for the test not only failed to cite any supporting outside sources, but also neglected to disclose that there is in fact evidence from another research study which directly contradicts their concluding claims.

This is why I sometimes cite the data collected via the research for these tests while simultaneously rejecting the authors’ conclusions and labeling these tests as misguided attempts to appeal to the belief that “health testing is the way to save our breeds, so we need to create as many health tests as possible,” which is woefully common among purebred dog breeders.

5) Everyone who writes to prove a point has an agenda by definition; identifying that agenda matters far less than determining whether it is scientifically well-founded. Just as every author is bound to be biased in one way or another, every author writing a factual article intended to substantiate their own beliefs has an agenda [of making their point]. In most cases, these beliefs are part of the bigger picture of the author’s worldview, which can also be considered an agenda. The fact that an author has an agenda, whether broad or specific, should matter less than whether the topic they’re writing about in a particular piece is addressed in a logically sound, factually based, scientifically founded manner.

This ties back into the first point of evaluating each individual argument on its own merits rather than choosing to blindly disregard them on the basis of distrust or contempt for the author. No one will agree with you on every opinion. Some will disagree with you far more than others. All of them are potentially valuable sources of information which can and should be evaluated accordingly.

Choosing to selectively read only the writing of authors you more or less agree with does nothing to expand your worldview or challenge your own beliefs. A mind that is never challenged will never grow. If your only reading consists of works which you know will support your existing beliefs, then you as a reader are being unfairly biased. This leads up to my final point.

6) Apply the same level of skepticism you use for evaluating others’ beliefs to evaluating your own beliefs. It’s great to have high standards for evaluating new arguments. It’s even better to have high standards for supporting your own.

If an article rubs you the wrong way, take a step back and consider why. Is it because its arguments are poorly constructed and have no supporting evidence? Or is it because it attacks your own core beliefs, which are so essential to your worldview that any attempt to question them leaves you feeling uneasy? Does the writing only present a one-sided story? Or does it make you uncomfortable because you have put your faith in an opposing one-sided story? Is the author so fixated on their own personal agenda that it skews their message? Or is it you who are so fixated on your own personal agenda that you cannot earnestly consider altering your own beliefs?

Know why you believe what you believe. Don’t just blindly accept these beliefs merely because, “Everyone I know believes it,” or, “We’ve always done it this way.” If you expect an author to ground their beliefs in factual evidence, it’s only fair to expect the same of yourself. If you expect an author to change their opinions in light of new or improved evidence, then you should do so too. Educated discourse is a two-way street; a closed mind is a roadblock, cutting off constructive communication for every party involved.

Question everything, always.