We’ve all heard it somewhere before: “It’s all just a big conspiracy and those bloody scientists are just trying to protect their funding sources.”
Whether it’s about climate change, pharmacology, genetically modified organisms or down-to-earth environmentalism, people who don’t want to agree with a particular scientific finding often invoke the conspiracy argument.
There are three main reasons why conspiracies among scientists are impossible. First, most scientists are just not that organised, nor do they have the time to get together to plan such elaborate practical jokes on the public. We can barely keep our own shit together, let alone construct a water-tight conspiracy. I’ve never met a scientist who would be capable of doing this, much less one who would want to.
But this doesn’t necessarily prove my claim that it is ‘impossible’. Second, and most importantly, the idea that a conspiracy could form among scientists ignores one of the most fundamental components of scientific progress: dissension; and bloody hell, can we dissent! The scientific approach is one where successive lines of evidence testing hypotheses are eventually amassed into a concept, then perhaps a rule of thumb.
When I write ‘tested’, I am referring to the testing of an hypothesis. An hypothesis is not merely a belief; it is a statement about a phenomenon that can be isolated and measured experimentally (or mensuratively, in the case of so-called ‘natural’ experiments). For example, an hypothesis might be that there is no change in the biomass of a fish stock at a certain fishing rate. The way to test this is to measure the fish stock before fishing occurs, and then afterwards. It’s important here that both replicates (different populations of fishes) and controls (places that receive no fishing) are included in the experimental design; otherwise, confounding effects that might have nothing to do with fishing per se could lead us to the wrong conclusion. If we then find that fishing rate x leads to a measurable (i.e., greater than the error of the measurement itself) decline in the fish stock, we can reject the (null) hypothesis.

This is the Popperian concept of falsifiability. It’s a pretty basic description of hypothesis testing, and it rather ignores the multiple working-hypotheses framework, but it gives you a good idea of how it’s done. Stating that the noise of the boats causes the fish stock to decline is not an hypothesis in this case because it’s not easily testable (i.e., we would have to measure changes in noise with fishing rate, establish how noise affects fish physiology and subsequent survival probability, and be able to rule out all other effects, such as the direct mortality from fishing itself).
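To make that design concrete, here is a minimal simulation sketch in Python. Everything in it is my own illustrative choice rather than anything from a real fishery: the numbers are invented, the variable names are made up, and the simple two-sample t-test on before-to-after changes stands in for the more sophisticated analyses a real stock assessment would use.

```python
# Toy simulation of the fishing experiment described above:
# replicated 'impact' populations (fished) and 'control' populations
# (no fishing), each measured before and after the fishing period.
# All numbers are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_reps = 5             # replicate populations per treatment
baseline = 1000.0      # mean biomass (tonnes) before fishing
measurement_sd = 50.0  # sampling error on each biomass estimate
fishing_effect = 0.8   # fished stocks retain ~80% of their biomass

# 'Before' surveys: both groups drawn from the same distribution
before_control = rng.normal(baseline, measurement_sd, n_reps)
before_impact = rng.normal(baseline, measurement_sd, n_reps)

# 'After' surveys: only the impact group experiences fishing
after_control = rng.normal(baseline, measurement_sd, n_reps)
after_impact = rng.normal(baseline * fishing_effect, measurement_sd, n_reps)

# Compare the before-to-after change between treatments, so that any
# region-wide change (a potential confounding effect) cancels out.
change_control = after_control - before_control
change_impact = after_impact - before_impact

# Null hypothesis: fishing causes no change in biomass, i.e. both
# groups of changes come from the same distribution.
t_stat, p_value = stats.ttest_ind(change_impact, change_control)
print(f"mean change (control): {change_control.mean():8.1f} t")
print(f"mean change (impact):  {change_impact.mean():8.1f} t")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would lead us to reject the null hypothesis of
# 'no change in biomass at this fishing rate'.
```

The point of the controls and replicates shows up in the code: the test compares the *changes* in the two groups, so a decline that hits fished and unfished populations alike (a disease outbreak, say) would not, by itself, reject the null hypothesis.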
So if many tests of the hypotheses come up with the same (general) conclusion, the rule of thumb might eventually become a theory. A theory is not, as many non-scientists think, merely an untested model of how something works: it is instead a massive body of tested evidence. Some theories even make it to the hallowed status of ‘law’, but that is very rare indeed. In the environmental sciences, one could argue that there is no such thing as a law. Well-informed non-scientists might understand, or at least appreciate, that process, but few people outside the sciences have even the remotest clue about what a real pack of bastards we can be to one another.
I’ve written before about the peer-review process that is the bane (and simultaneously, the saviour) of every scientist in the world, but it is useful to repeat here. Scientists write ‘papers’ (usually of the format: Introduction [including the hypotheses to be tested], Methods [how we tested them], Results [what we found], Discussion [the implications of the results], Supporting References [previously published papers]), then submit them to various peer-reviewed journals (collections of papers published by a scientific publishing company). Most of the time the paper is rejected after several of our ‘peers’ review it (hence, ‘peer review’). An outright rejection (i.e., go away and don’t bother us again) is usually accompanied by some caring and supportive words like “fail,” “flawed,” and “nonsense”. If we do manage to get a foot in the door and are permitted to revise the paper according to the reviewers’ suggestions and critiques, then the paper might eventually be ‘accepted’ for publication and ultimately published in the journal (now, more often than not, in an online-only version).
While we might get better at writing papers as we gain experience, we also target more and more difficult-to-crack journals as we age, such that the rate of rejection/major revision does not change much as we progress through our careers; we just become numb to the pain and soldier on. In other words, if there are any chinks in the armour of the evidence for any particular phenomenon, other scientists are the first to expose and exploit them. In fact, many scientists have built entire careers out of destroying the work of others.
This point alone prevents scientific conspiracies from ever happening, because we could never guarantee to keep everyone quiet. There would always be several scientists out there waiting to expose the flaws of the conspirators. It is therefore implausible that scientists could conspire. By ‘expose’, I mean via careful and empirical demonstration of the flaws in other peer-reviewed papers, and not merely a statement to the effect that ‘it’s flawed’. In other words, the process itself is its own check on the integrity of the phenomena under investigation.
I’m not for a moment suggesting that errors don’t occur, but they are identified and corrected over time, such that knowledge progresses incrementally as better techniques are developed, more data become available, and more scientists test many different related hypotheses describing different angles of the problem. So yes, long-cherished paradigms can eventually be overturned, but these are instances of new insight and the evolution of knowledge, not the product of exposed conspiracies. Science is a human invention, and humans are imperfect, so they will make mistakes. Science is therefore the pursuit of subjectivity reduction (not objectivity per se, because that is impossible). Eventually, the truth comes out via the scientific method.
The third and final reason that scientific conspiracies cannot happen is that we’re simply not paid enough, either in terms of our personal salaries or the money we receive as grants to fund our research. I know of no scientist who has ever become rich from doing her/his science. If scientists are paid lavishly by special-interest groups, then it’s fairly straightforward to determine whether their scientific approach suffers as a consequence (it usually does, and their bias is exposed). No amount of special-interest funding can overwhelm the tried-and-tested scientific process.
The next time some pompous, ignorant git claims that scientists are just a bunch of conspirators covering their own arses, you can show them this post and tell them they’re full of shit.
CJA Bradshaw