There's certainly such a thing as bad science, though I'm not sure the OP's examples are all particularly clear cases of it. The whole chlorinated water saga may have happened for stupid reasons, but it sounds like the science itself was done fine.
The number one problem in science is overstating what can be concluded from your results. Given that your funding and career prospects depend on publishing papers with big conclusions that will attract lots of citations, there's a huge incentive to do this. In fact I don't think I've ever seen a paper at the draft stage that didn't need its conclusions toned down a bit before it could be published. This is one of the easier things for peer review to catch.
A slightly less common case is presenting a conclusion that simply doesn't follow logically from the results you've collected and shown. That is definitely well within the realm of "bad science". An example from a recent conference I was at:
Person is attempting to decipher something about how birds remember locations. Comes up with a hypothesis about how memory works. Devises a test: if memory works in the hypothesised way, birds will do A; if not, birds will do B. So far so good. In practice, when they observe 10 birds, 5 do A and 5 do B. They then triumphantly conclude their hypothesis is correct, and include a hand-wavy afterthought explanation of why birds might still do B sometimes even if their hypothesis is true.
Anyone with a grasp of the scientific method will be face-palming at this point. Even more so when I tell you that those aren't hypothetical numbers; that was the real sample size. Peer review should in theory catch these things - but it doesn't always. Studies with these kinds of problems seem to be particularly prone to cropping up as filler in the science section of mainstream news sites.
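To put a number on just how uninformative that bird result is, here's a quick back-of-the-envelope calculation. This is my own toy illustration, not anything from the actual study - it assumes each bird independently picks A or B with 50/50 odds under the null hypothesis (i.e. "memory doesn't work the hypothesised way"):

[code]
# Toy illustration (my numbers, not the study's analysis): if each of
# 10 birds independently picks A or B like a fair coin flip, how
# surprising is a 5/5 split?
from math import comb

n = 10  # birds observed

# Probability of exactly 5 of 10 doing A by pure chance
# (binomial pmf, which simplifies to comb * 0.5^n when p = 0.5)
p_exactly_5 = comb(n, 5) * 0.5**n
print(f"P(exactly 5 do A by chance)  = {p_exactly_5:.3f}")  # ~0.246

# Probability of 5 or more doing A - the supposed 'evidence' for the hypothesis
p_at_least_5 = sum(comb(n, k) for k in range(5, n + 1)) * 0.5**n
print(f"P(5 or more do A by chance) = {p_at_least_5:.3f}")  # ~0.623
[/code]

A 5/5 split is literally the single most probable outcome if the birds are choosing at random, so the data provide no evidence either way. With n = 10 you'd want something like 9 or 10 birds doing A (about a 1% chance under the null) before chance stops being a plausible explanation.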
bhavv said:
Nope. Repeated testing and statistical testing of the results is always used to rule out bogey results. If the results from one repeat are found to be different to 99 others, it won't make the slightest dent in the overall peer reviewed conclusion.
That's also a very strong reason why 'one reference is never adequate' on undergrad degrees. The main science journals don't just publish random whacko lunatic results and conclusions if they are shown to be far from the norm, unless a statistical majority of repeats produce the same random whacko lunatic results, in which case they are no longer random whacko lunatic results.
I can't remember the last time I came across a result new enough to be relevant to my research that had been independently repeated 100 times. Or 10 times, for that matter. Journals are only interested in publishing research that is new. If you have a result that isn't obviously absurd, and plausible data to back it up, some journal will publish it. They're not going to sit on it for years while it's replicated enough times to create a big enough sample of studies for meaningful statistical analysis. Once the first study is published, there is precisely zero incentive for anyone else to do exactly the same study. It won't get funded, it will be hard to get published unless you can find a new take on it, and it will get very few citations even if you do get it in print. At best you get a few papers from the research groups that got scooped by the first lot, which will corroborate it (and they'll be trying very hard to find some way to draw a distinct conclusion from the first paper!).
At undergraduate level you may have many references for a claim. In the world of research you'll probably have one. Two on a good day. Maybe even two that agree with each other on a really good day. Now eventually, of course, if a claim is interesting someone will usually come up with an idea that's based on it, and will necessarily have to duplicate the results as a starting point. Suppose I do that and can't duplicate it, which is all too common. Have I screwed up, or did the original publishers? It would require a load more time and resources to run the experiments needed to be sure, and if I am right then I'm heading down a dead end anyway. Better to cut my losses and find a new project. Eventually dodgy results will get caught as people repeatedly fail to duplicate them, but for this reason it can take quite a long time.
bhavv said:
You repeat the test yourself a few times first to check its accuracy. Also in most such cases, discoveries are made by a team, not an individual, so they can also repeat it as well.
Repeating the experiment yourself is a great way of repeating the same errors and getting a consistently wrong result. You won't catch a dodgy instrument or a contaminated reagent that way.
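To make that concrete, here's a toy simulation (entirely made-up numbers of mine, purely to illustrate the point): a hypothetical instrument that reads 5% high gives you beautifully consistent repeats that are all wrong by the same amount.

[code]
import random

random.seed(1)  # reproducible illustration

TRUE_VALUE = 100.0       # the quantity we're actually trying to measure
CALIBRATION_BIAS = 1.05  # hypothetical: our instrument reads 5% high
NOISE_SD = 0.5           # random scatter from one repeat to the next

def one_measurement():
    # Every repeat goes through the same faulty instrument, so the
    # bias is baked into all of them identically.
    return TRUE_VALUE * CALIBRATION_BIAS + random.gauss(0, NOISE_SD)

repeats = [one_measurement() for _ in range(100)]
mean = sum(repeats) / len(repeats)
spread = (sum((x - mean) ** 2 for x in repeats) / len(repeats)) ** 0.5

print(f"mean of 100 repeats: {mean:.2f}")    # ~105, not 100
print(f"scatter (std dev):   {spread:.2f}")  # ~0.5 - looks lovely and precise
[/code]

The scatter between your own repeats only tells you about the random error; the systematic error is completely invisible to it. That's exactly why replication by an independent group, with different kit and different reagents, is worth so much more than re-running your own setup.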
While researchers are generally part of a group, it is not standard practice to have group members duplicating each other's experiments. They'll be working on different aspects of the project. Unless someone's results are obviously irreconcilable with the rest, the incentive is to publish before another group does rather than to spend more time and resources on a result that's already in the bag. If a group leader were to ask someone to simply duplicate another group member's results, quite a few people I've worked with would take that as an accusation of fraud - or at best of abject incompetence. You can make an argument that they shouldn't react that way, but in reality this only tends to get done when those kinds of suspicions are growing.
There is plenty of published science that has been done rigorously, stands up to later replication, and can be the foundation for new ideas. I'd like to think mine is some of it. But don't kid yourself that there's no such thing as "bad science" or that everything you read in even the best peer reviewed journals is automatically correct. Especially if it's new and making an impressive claim.