Examples of bad science

You repeat the test yourself a few times first to check its accuracy. Also, in most such cases discoveries are made by a team, not an individual, so they can repeat it as well.
 
'One reference is never adequate' might work for undergraduate work. But if you are on the edge of human knowledge, where science is supposed to be, there might be only one reference, maybe even because currently no one else is able to do something similar. So what do you do?

Take the science believer alternative, and go use something no less established than gravity as your example. Avoid actual science at all costs and dogmatically "explore" only the dustiest parts of the library.
 
Who will fund a study that's just trying to replicate a result? Which high-impact journal is going to publish it? Who is going to hire the guy/girl whose PhD project had replicating known results as its main focus?
Follow the incentives.

You repeat the test yourself a few times first to check its accuracy.

And detect the microwave cooking your lunch every time...
 
You repeat the test yourself a few times first to check its accuracy. Also, in most such cases discoveries are made by a team, not an individual, so they can repeat it as well.

Actual science is a bit different from an undergrad lab project that I could complete in one morning.

Often you have to be happy with the data you could get. And there are other scientists who will publish before you with worse data if you take too long. As dutchfire said: Follow the incentives.

And it is not like you can get rid of systematic errors by repeating the same experiment, anyway.
 
Bad science:

Something doesn't work right away = it doesn't work
Something doesn't kill you right away = it's safe
'They' did a (one) study on it = We have a concrete answer
The FDA said it's OK = lol
"Well if it were harmful 'they' wouldn't allow it"
"Well if it were safe 'they' would allow it"
"There's no corruption in academia/science, it's separate from/purer than governmental/corporate influence"

The idea that scientists and researchers are objective & lack ego is insane. Imagine you had a theory that was believed to be true for decades & then some whippersnapper published a paper disproving it. The ideal scientist would take it in stride & try to replicate the findings to be sure he was indeed wrong, but not everyone is so noble. When there are millions/billions of dollars on the line for maintaining the status quo, being noble may not even cross the mind.
 
Modern cosmology, with all these Big Bang theories, expanding universes, dark enigmas, etc., is really bad science: dogmatic, careerist, full of marketing. It has evolved into some kind of scientific religion with a creation myth of its own, losing the practical, experimental side, the scientific method; it needs to be shaved with Occam's razor. Together with some outlets of particle/quantum physics it encroaches on the sacred: causality.
 
Modern cosmology, with all these Big Bang theories, expanding universes, dark enigmas, etc., is really bad science

Nope. These are all examples of hypotheses, plus the universe is proven to be expanding.
 
Modern cosmology, with all these Big Bang theories, expanding universes, dark enigmas, etc., is really bad science: dogmatic, careerist, full of marketing. It has evolved into some kind of scientific religion with a creation myth of its own, losing the practical, experimental side, the scientific method; it needs to be shaved with Occam's razor. Together with some outlets of particle/quantum physics it encroaches on the sacred: causality.
Quantum entanglement is an experimental reality explained mathematically by quantum mechanics. Problems come when we try to adjust such things to our perception, which leads to the many different interpretations of quantum mechanics, some leading to a multiverse, some leading to a rupture of locality and causality. All these interpretations are more philosophical than scientific, so I would not call it bad science but plain curiosity and the need we humans have to understand the universe, which after all is the engine behind any science.
 
There's certainly such a thing as bad science, though I'm not sure the OP's examples are all particularly clear cases of it. The whole chlorinated water saga may have been for stupid reasons, but sounds like the science itself was done fine.

Number one problem in science is overstating what can be concluded from your results. Given that your funding and career prospects depend on publishing papers with big conclusions that will attract lots of citations, there's a huge incentive to do this. In fact I don't think I've ever seen a paper at the draft stage which didn't have to have its conclusions toned down a bit before it could be published. This is one of the easier things for peer review to catch.

A slightly less common case is presenting a conclusion which simply doesn't follow logically from the results you've collected and shown. That definitely is well in the realm of "bad science". An example from a recent conference I was at:

Person is attempting to decipher something about how birds remember locations. Comes up with a hypothesis about how memory works. Devises a test - if memory works in the hypothesised way, birds will do A. If not, birds will do B. So far so good. In practice, when they observe 10 birds, 5 do A and 5 do B. They then triumphantly conclude their hypothesis is correct, and include an afterthought hand-wavy explanation of why birds might still do B sometimes if their hypothesis is true.

Anyone with a grasp of the scientific method will be face-palming at this point. Even more so when I tell you that those aren't hypothetical numbers, that was the real sample size. Peer review should in theory catch these things - but it doesn't always. Studies with these kinds of problems seem to be particularly prone to cropping up as filler in the science section of mainstream news sites.
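To put a number on why: if each bird were simply picking A or B at random, 5 of 10 doing A is exactly what you'd expect, so the result carries no evidence for the hypothesis at all. A minimal sketch of that arithmetic in Python, using only the anecdote's numbers (the helper function name is mine):

```python
from math import comb

def p_at_least(k: int, n: int) -> float:
    """P(at least k of n birds do A, if each bird picks A or B at random)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# The anecdote's numbers: 10 birds observed, 5 did A and 5 did B.
print(p_at_least(5, 10))   # ~0.62 -- 5-or-more of 10 is what coin flips give you
print(p_at_least(8, 10))   # ~0.055 -- even 8 of 10 would be marginal with n = 10
```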

bhavv said:
Nope. Repeated testing and statistical testing of the results is always used to rule out bogey results. If the results from one repeat are found to be different to 99 others, it won't make the slightest dent in the overall peer reviewed conclusion.

That's also a very strong reason why 'one reference is never adequate' on undergrad degrees. The main science journals don't just publish random whacko lunatic results and conclusions if they are shown to be far from the norm, unless a statistical majority of repeats produce the same random whacko lunatic results, in which case they are no longer random whacko lunatic results.

I can't remember the last time I came across a result new enough to be relevant to my research that had been independently repeated 100 times. Or 10 times for that matter. Journals are only interested in publishing research that is new. If you have a result that isn't obviously absurd, and plausible data to back it up, some journal will publish it. They're not going to sit on it for years while it's replicated enough times to create a big enough sample size of studies for meaningful statistical analysis. Once the first study is published, there is precisely zero incentive for anyone else to do exactly the same study. It won't get funded, it will be hard to get published unless you can find a new take on it, and it will get very few citations even if you do get it in print. At best you get a few papers from the research groups who got scooped by the first lot which will corroborate it (and they'll be trying very hard to find some way to draw a distinct conclusion from the first paper!)

At undergraduate level you may have many references for a claim. In the world of research you'll probably have one. Two on a good day. Maybe even two that agree with each other on a really good day. Now eventually of course if a claim is interesting someone will usually come up with an idea that's based on it, and will necessarily have to duplicate the results as a starting point. Suppose I do that and can't duplicate it, which is all too common. Have I screwed up, or was it the original publishers? Would require a load more time and resources to run the experiments to be sure, and if I am right then I'm heading down a dead end. Better to cut my losses and find a new project. Eventually dodgy results will get caught as people repeatedly fail to duplicate them, but for this reason it can take quite a long time.

bhavv said:
You repeat the test yourself a few times first to check its accuracy. Also, in most such cases discoveries are made by a team, not an individual, so they can repeat it as well.

Repeating the experiment yourself is a great way of repeating the same errors and getting a consistently wrong result. You won't catch a dodgy instrument or a contaminated reagent that way.

While researchers are generally part of a group, it is not standard practice to have group members duplicating each other's experiments. They'll be working on different aspects of the project. Unless someone's results are obviously irreconcilable with the rest the incentive is to publish before another group does rather than spending more time and resources on a result that's already in the bag. If a group leader were to ask someone to simply duplicate another group member's results quite a few people I've worked with would take that as an accusation of fraud - or at best abject incompetence. You can make an argument that they shouldn't react that way, but in reality this only tends to get done if those kind of suspicions are growing.

There is plenty of published science that has been done rigorously, stands up to later replication, and can be the foundation for new ideas. I'd like to think mine is some of it. But don't kid yourself that there's no such thing as "bad science" or that everything you read in even the best peer reviewed journals is automatically correct. Especially if it's new and making an impressive claim.
 
There's certainly such a thing as bad science, though I'm not sure the OP's examples are all particularly clear cases of it. The whole chlorinated water saga may have been for stupid reasons, but sounds like the science itself was done fine.

You certainly supported the one "kind" of bad science, so thanks. The chlorine example was to illustrate a different kind, which you sort of acknowledge.

It was done not for stupid reasons, but to provide a result that could be used to falsely support a predrawn false conclusion. The science may have been done correctly. In fact to the best of my knowledge it was done correctly. It could even be appreciated, in a vacuum, as good work on the lifespan of bacteria under varying conditions. But it wasn't done in a vacuum, it was done with intent to defraud, or at least with tacit support of the intent to defraud.
 
Timsup2nothin said:
It was done not for stupid reasons, but to provide a result that could be used to falsely support a predrawn false conclusion. The science may have been done correctly. In fact to the best of my knowledge it was done correctly. It could even be appreciated, in a vacuum, as good work on the lifespan of bacteria under varying conditions. But it wasn't done in a vacuum, it was done with intent to defraud, or at least with tacit support of the intent to defraud.

"Defraud" isn't entirely fair. If the scientists are only reporting their results accurately (and that situation is rather a recipe for bad science of the confirmation bias flavour) there's no problem in terms of getting incorrect information getting into the scientific literature. So long as their conclusion clearly stated that in neither case is the bacterial survival period long enough to be a health hazard I don't really see an objection from an integrity point of view. And if you never publish anything where someone else might lie about what you said, you'll never publish anything at all. I wouldn't file this under "bad science". Maybe in the extremely large file "bad reporting of science by non-scientists"

By the way, I wouldn't get too worked up about the "wasteful" aspect. As The_J pointed out, this won't have diverted much in the way of resources. If this one got passed down to me, I'd be thinking that the group has a number of undergraduate project students, all of whom need something to do that will give a result in a matter of weeks anyway. Something where they can learn lab technique, where we have a pretty good idea what the result will be already, and someone else is providing funds that can be spent on more useful projects? Definitely would be a net gain.
 
"Defraud" isn't entirely fair. If the scientists are only reporting their results accurately (and that situation is rather a recipe for bad science of the confirmation bias flavour) there's no problem in terms of getting incorrect information getting into the scientific literature. So long as their conclusion clearly stated that in neither case is the bacterial survival period long enough to be a health hazard I don't really see an objection from an integrity point of view. And if you never publish anything where someone else might lie about what you said, you'll never publish anything at all. I wouldn't file this under "bad science". Maybe in the extremely large file "bad reporting of science by journalists etc."

By the way, I wouldn't get too worked up about the "wasteful" aspect. As The_J pointed out, this won't have diverted much in the way of resources. If this one got passed down to me, I'd be thinking that the group has a number of undergraduate project students, all of whom need something to do that will give a result in a matter of weeks anyway. Something where they can learn lab technique, where we have a pretty good idea what the result will be already, and someone else is providing funds that can be spent on more useful projects? Definitely would be a net gain.

Such statements were not made. The studies were maintained "pure," in that they measured only the effect on the lifespan of the bacteria, without any mention of any impact on health hazards. You get what you pay for, and the chemical companies paid for something they could use. Impact on health hazards (ie, none) was determined through further studies of how far bacteria could travel in these short spans of time, funded mostly at public expense when cities and counties were sued for "recklessly deactivating the necessary sanitation" in their park pools...with a product that by then had been in common usage for a decade.

Between the costs of funding those studies, paying off any number of lawsuits based on "scientific proof" of such recklessness before those studies were done, and the fact that there are still publicly operated pools collectively spending millions of dollars annually on chlorine, more than half of which they are basically just releasing into the sky because they "can't take risks with public health" by using a product that is now ubiquitous and demonstrated to be safe a thousand times over, I can't even guess what the total bill for the results of this research comes to. I get that research is research and funding is always good, but do you think "science," as represented by the labs being funded, bears any responsibility here? Is this not a form of "bad science"?
 
Timsup2nothin said:
Such statements were not made. The studies were maintained "pure," in that they measured only the effect on the lifespan of the bacteria, without any mention of any impact on health hazards. You get what you pay for, and the chemical companies paid for something they could use. Impact on health hazards (ie, none) was determined through further studies of how far bacteria could travel in these short spans of time, funded mostly at public expense when cities and counties were sued for "recklessly deactivating the necessary sanitation" in their park pools...with a product that by then had been in common usage for a decade.

I thought I'd better do a little fact checking on this to see what was actually said in these papers, so I had a quick look through the literature. After all, if there's such a slew of redundant studies, shouldn't be too hard to find them! There are a handful of papers in the databases on this subject from the 70s and early 80s. Black et al. 1970, and Canelli's 1974 review seem to be the closest to what you've described.

When you go through their results it's conspicuous that nowhere are they measuring the lifespan of E. coli (or any other pathogen) to ridiculous precision as you've complained about. What some of them are doing is measuring the time and conditions required to eliminate 99.999% of bacteria, which instantly rings alarm bells for where that "ridiculous number of decimal places" idea came from. That's still actually leaving a fair number of bacteria alive. There's no millisecond precision in any of them; typically they're rounding to the nearest minute. Other papers are straight-up empirical studies of the numbers of live bacteria in pool water using various disinfection systems. Almost all detected some, but there are orders of magnitude of difference in the populations between different systems. You'll note that unless these studies are outright falsifications (not merely skewed or incomplete), this tells us the idea that all the bacteria are dropping dead mere inches from the source regardless of the disinfection system used is utter nonsense.
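For a rough sense of scale (the starting load below is hypothetical, not a figure from those papers): a 99.999% kill is a 5-log reduction, which can still leave live bacteria behind.

```python
import math

# Purely hypothetical starting load, for illustration only -- not a figure
# taken from the papers discussed above.
initial_cfu = 1_000_000        # bacteria in some volume of pool water
kill_fraction = 0.99999        # the "99.999% eliminated" criterion

survivors = initial_cfu * (1 - kill_fraction)
print(round(survivors))                      # 10 -- a "fair number" still alive
print(math.log10(initial_cfu / survivors))   # ~5.0 -- a 5-log kill, not sterility
```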

I get that research is research and funding is always good, but do you think "science," as represented by the labs being funded, bears any responsibility here? Is this not a form of "bad science"?

Having now looked at the studies in question rather than simply taking your description of them as fact, no they're not "bad science". Someone certainly seems to have done a good job of convincing you that they're bad though.

If you wish to debate whether the hypothetical situation you've outlined would be bad science - then as I said, the scientists ought to report the relevance (or lack thereof) of the results to the health issues. Then they've met their responsibilities. This hypothetical situation does not, however, seem to bear any resemblance to reality.
 
Fair enough. I am very familiar with the reality side of the situation and not familiar enough with the science side, so how they connect other than in court I can't really say, and apparently shouldn't have. I do know that studies that you apparently have access to were used to paint the product as dangerous, by companies who funded the studies and benefited financially from the results. Those results either do not actually support the position taken by those companies, or they are wrong, because half a century of real world experience demonstrates that use of the product is not a health risk.

Since I tend to believe that scientists produce results that are true measurements, I think their results were just misused, not inaccurate. But I also think that in order to be readily misused certain connections have to be ignored, either intentionally or through failure to consider context.
 
But I also think that in order to be readily misused certain connections have to be ignored, either intentionally or through failure to consider context.

There are two problems with scientists providing context:
1) The scientist doing the measurement might not be qualified to accurately provide that context. If you know how long bacteria live in a pool, you do not immediately know how dangerous that is. You would need a model of how many bacteria the average unshowered pool visitor drags in, how far they travel and how many bacteria are needed on average to cause a serious infection. You need data to create these models, and if the data does not exist in usable form, you need to collect that data, which in turn requires funding (which probably would not come from the companies). And even if you have calculated the increase in risk, who decides whether it is acceptable? A 0.01% risk per visit is still a significant number if you consider the number of pool visitors (a rough sketch of that arithmetic follows at the end of this post). "It is totally harmless" is not a statement anyone would like to make without having really good data to back it up.

2) The incentives are all wrong for that. As I said, these days you have to sell your paper. And "we investigated x, although x is very insignificant" is a very bad selling strategy, and you might be out of a job soon. Society rewards inventing a bit of danger and does not reward an honest discussion of the relevance of your work.

Or in other words: If you want to see good science, you have to reward good science instead of just relying on the integrity of the scientists.
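A back-of-the-envelope sketch of the arithmetic in point 1, with purely hypothetical attendance figures:

```python
# Back-of-the-envelope arithmetic for point 1 above -- every figure is hypothetical.
risk_per_visit = 0.0001        # the 0.01%-per-visit infection risk used as an example
visits_per_year = 5_000_000    # assumed attendance across a city's public pools

print(round(risk_per_visit * visits_per_year))   # 500 expected infections per year --
                                                 # hard to call "totally harmless"
```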
 
I'm surprised that morality is only of minor importance in this discussion. Science done by violating human rights is bad science in any case, even if its methodology is really good. Mengele and Rascher are just a few examples from the past, but the field of genetics/reproductive medicine provides a wide range of possible experiments which can be considered "bad science".
 
I'm surprised that morality is only of minor importance in this discussion. Science done by violating human rights is bad science in any case, even if its methodology is really good. Mengele and Rascher are just a few examples from the past, but the field of genetics/reproductive medicine provides a wide range of possible experiments which can be considered "bad science".

That's different than the two forms I brought up, but I agree, that is bad science. I doubt that anyone would suggest human testing on unwilling subjects could be called anything else.
 
There are two problems with scientists providing context:
1) The scientist doing the measurement might not be qualified to accurately provide that context. If you know how long bacteria live in a pool, you do not immediately know how dangerous that is. You would need a model of how many bacteria the average unshowered pool visitor drags in, how far they travel and how many bacteria are needed on average to cause a serious infection. You need data to create these models, and if the data does not exist in usable form, you need to collect that data, which in turn requires funding (which probably would not come from the companies). And even if you have calculated the increase in risk, who decides whether it is acceptable? A 0.01% risk per visit is still a significant number if you consider the number of pool visitors. "It is totally harmless" is not a statement anyone would like to make without having really good data to back it up.

2) The incentives are all wrong for that. As I said, these days you have to sell your paper. And "we investigated x, although x is very insignificant" is a very bad selling strategy, and you might be out of a job soon. Society rewards inventing a bit of danger and does not reward an honest discussion of the relevance of your work.

Or in other words: If you want to see good science, you have to reward good science instead of just relying on the integrity of the scientists.

Measuring the amount of bacteria in water might be something you can do with a month of PhD time and the material that's available in your lab anyway. Measuring the impact of the bacteria on someone's health requires getting past a medical ethics board.

A large part of doing science is finding interesting questions that have not been answered, yet that you can answer within time and money constraints.
 
That's different than the two forms I brought up, but I agree, that is bad science. I doubt that anyone would suggest human testing on unwilling subjects could be called anything else.
Why are humans so special? What about torturing animals in the name of science or blowing up a few hundred square miles or so?
 