Examples of bad science

:rolleyes:

Don't you ever get tired of this silly paranoia? I know I'm beyond tired of your insistence on insulting me by claiming that science is a religion when you've been told many, many times that it is not.

Told by who? Someone who knows? Someone who thinks things are the way they say they are just because they say so? I've been told many, many times that there is a god. Heck, I've been told many, many times that there is a Santa Claus.

As to this being an insult to you...wait...did you just say I was paranoid?

As far as I'm concerned, it's your continued insulting behavior that is the affront. There are some things I think/believe that some people here would take exception to, either because of the subject matter or how I feel about it, but since this is one of my "internet homes," I act like a guest and don't post them.

Really? I'd be fascinated to see what you consider not "guest worthy."

Yes, there is a plethora of bad science. There are mistakes that are accidental, results that have been falsified, and pointless "experiments" that go on year after year - not because they need to, but because of the grant $$$$$$$ - and as a consequence, innocent animals suffer.

Thanks for the agreement. Why you felt like you had to provide such a preface is quite the curiosity. I'd hate to think what kind of blast I'd have had to wade through to get to your opinion if you had not agreed.

That doesn't mean science itself is bad, just that there are some people who are either incompetent, without ethics, or both.

I'm pretty sure I said that, or at least strongly implied it, but that's a good clear statement. Thanks.
 
The first thing I thought of was Ben Goldacre's criticism of the pharmaceutical industry, but there have been a number of scandals involving people trying to circumvent or manipulate the peer review process:

http://www.nature.com/news/publishing-the-peer-review-scam-1.16400

http://www.washingtonpost.com/news/morning-mix/wp/2015/03/27/fabricated-peer-reviews-prompt-scientific-journal-to-retract-43-papers-systematic-scheme-may-affect-other-journals/

There's also a lot of bad science in various fields of lobbying: the tobacco industry and climate change denial being prime examples. Funny how money so often appears to be the motivation.

As well as these, I'd lump a lot of pseudo-science under a broad 'bad science' label. Most people who claim their investigations support homeopathy, auras, psychic phenomena of all kinds, chiropractic, divining etc etc. are using a scientific method of sorts - conducting repeatable experiments, forming and testing hypotheses and so on. But the veneer of respectability falls away quickly under inspection. With homeopathy, for example, the basis is the principle that 'like cures like'. A sophisticated body of theory and practice is erected on that foundation, but there is actually no evidence whatsoever that the basic premise is true.
 
I disagree that the first example is bad science. If you have a signal that you do not understand, you need to investigate it. Most of the time it turns out to be a stupid error, but if you preemptively attribute everything to stupid errors, you will never find anything surprising and make a big discovery.

But I agree that there is such a thing as bad science. The first thing that comes to mind is misanalyzing data and drawing conclusions that the data does not support. If that is not caught during the review process, the conclusion enters the literature and might go unchallenged for a long time. Any further science based on that conclusion is likely going to waste.

And sometimes it is the science culture in a field that leads to this type of error on a large scale. There seems to be a replication crisis going on in medicine and psychology, because it has been noticed that a large fraction of studies cannot be replicated. The journal Science just published a meta-analysis of an effort to replicate 100 published studies with the result that less than 40% could be replicated.

The second thing I can think of is overselling. In the modern science system, it is unfortunately necessary to sell your findings. That is, every time you publish something, you need to explain why your findings are interesting and what major problem you hope they might be solving in the future. To be successful you need to stretch that to the limit of what you can reasonably claim. But there are some people who regularly go over that limit and claim things that are not supported by the data at all.

Then there is fraudulent science, but that deserves a category of its own, although the line between bad science and fraudulent science is a bit fuzzy.
 
Don't you ever get tired of this silly paranoia? I know I'm beyond tired of your insistence on insulting me by claiming that science is a religion when you've been told many, many times that it is not.
Nope. Clearly not. And he now even claims to be a scientist, though no scientist could so readily confuse the two, at least publicly, without losing all credibility as a real scientist.
 
Yeah, what uppi said: the incentive structure in science is a large factor in producing bad science.
 
But I agree that there is such a thing as bad science. The first thing that comes to mind is misanalyzing data and drawing conclusions that the data does not support. If that is not caught during the review process, the conclusion enters the literature and might go unchallenged for a long time. Any further science based on that conclusion is likely going to waste.

And sometimes it is the science culture in a field that leads to this type of error on a large scale. There seems to be a replication crisis going on in medicine and psychology, because it has been noticed that a large fraction of studies cannot be replicated. The journal Science just published a meta-analysis of an effort to replicate 100 published studies with the result that less than 40% could be replicated.
Then that isn't science at all. It is pseudo-science masquerading as science, much of which we continue to see in this forum.

The same is true for the so-called "studies" conducted by the tobacco industry.

But it is much more difficult to get away with this sort of nonsense in the hard sciences, which is a large part of the reason why they don't have a very good reputation at all in that regard.
 
If you are going to use a definition of the word "bad" that is so broad that it is synonymous with "unproductive" and "poorly executed", then there is bad science.

I would argue that your definition of the word bad makes it useless because it is so non-specific. Bad is already a word that has a lot of different meanings and is so loosely defined as to be pretty useless to begin with, and your usage isn't making it any better.
 
Then that isn't science at all. It is pseudo-science masquerading as science, much of which we continue to see in this forum.

The same is true for the so-called "studies" conducted by the tobacco industry.

But it is much more difficult to get away with this sort of nonsense in the hard sciences, which is a large part of the reason why they don't have a very good reputation at all in that regard.

Designating whole branches of science as pseudo-science goes a bit far.

And it is not all nonsense. As dutchfire said, a lot of this is due to wrong incentives set from outside of science. For example, how could you even properly account for publication bias? Where a paper gets published is often not the scientists' choice.
 
retractionwatch.com

Go nuts.
 
Designating whole branches of science as pseudo-science goes a bit far.
Then I'm glad I didn't actually do anything of the sort.

But I think work whose experimental data and analysis are not even validated by independent parties can hardly be called "science". Therein lies the real problem. Science has likely been hijacked by unscrupulous people to further their own agenda.

That is why I prefer to call it "pseudo-science" instead of "bad science". The former makes it clear that science is being deliberately perverted. The latter makes it sound like it is an inherent weakness in the scientific method, as if all science could be considered suspect because it may turn out to be labeled "bad science" in the future merely because future research modifies the results to some extent.

And it is not all nonsense. As dutchfire said, a lot of this is due to wrong incentives set from outside of science. For example, how could you even properly account for publication bias? Where a paper gets published is often not the scientists' choice.
Again, I didn't claim it was "all nonsense".

But I think you can agree that the lax standards used in some fields by some not-so-reputable publishers is also a major part of this problem.

As you pointed out, you see it a lot in the medical publications because there are millions, and perhaps even billions, riding on the outcome of some of the studies.

Also, the fossil fuel industry has clearly injected substantial sums into supposed AGW research.

Here is a supposedly "peer-reviewed study" by two members of university business schools whose topic is "Science or Science Fiction? Professionals’ Discursive Construction of Climate Change".

With all of the hysteria, all of the fear, all of the phony science, could it be that man-made global warming is the greatest hoax ever perpetrated on the American people? (Inhofe, 2003)

Would you call the supposed "science" conducted by two business school members "bad science" or "pseudo-science" in a study which attempted to do nothing but classify opinions regarding AGW into a few predefined buckets? Why are "professional engineers" even asked for their opinions in this matter since it is clearly completely outside their respective fields?
 
I'm actually going to agree with Tim on the first example. They so strongly believed in life in outer space that they didn't do due diligence on the source. The fact that no such radio source outside of quasars has been found shows they had an a priori view that aliens exist, and when they got a signal it led them to have confirmation bias about the result. That is bad science.

Another example of bad science would be Lysenkoism. Its claims never held up, and it was unfortunately the foundation of much of Soviet science.
 
Belief in aliens and Dialectical materialism are not "science". You might as well try to include social Darwinism and phrenology.
 
There's plenty of bad science.
Pseudo-sciences clearly fall under it (IMHO), because they don't properly execute the scientific process.
But also in the real sciences you often read things where the methodology is not optimal (not bad), where the data is not good (happens), and where conclusions are blown up (maybe a bit bad). The problems come when the methodology is totally below standard, the hypotheses are not properly tested, there are no proper controls, etc., etc. Totally awful examples don't happen that often, but examples with a few of these things in them... too often.

The science wasn't corrupted; it was just wasteful and only done because someone would pay to get it done. In years of research there was no advancement of useful knowledge accomplished beyond the first week.

I'm irritated at the chemists and biologists who didn't stand up to their administrators and say "I don't care if it is easy to fund, this research is pointless and I have better things to do with my time," because everyone involved in this research knew two things:

You know how that really goes?
In the end it might be a side project of one person, which just takes a long time to publish. In the meantime 5 really important research projects are funded from the money; it's just nowhere officially registered.
I know of one department where at least 4 people are getting paid by a big EU project, but only 1 person actually works on it... sometimes.
So yeah, the research might be a waste, but not necessarily the money which has been allocated for it ^^.
 
I disagree that the first example is bad science. If you have a signal that you do not understand, you need to investigate it. Most of the time it turns out to be a stupid error, but if you preemptively attribute everything to stupid errors, you will never find anything surprising and make a big discovery.

But I agree that there is such a thing as bad science. The first thing that comes to mind is misanalyzing data and drawing conclusions that the data does not support. If that is not caught during the review process, the conclusion enters the literature and might go unchallenged for a long time. Any further science based on that conclusion is likely going to waste.

And sometimes it is the science culture in a field that leads to this type of error on a large scale. There seems to be a replication crisis going on in medicine and psychology, because it has been noticed that a large fraction of studies cannot be replicated. The journal Science just published a meta-analysis of an effort to replicate 100 published studies with the result that less than 40% could be replicated.

The second thing I can think of is overselling. In the modern science system, it is unfortunately necessary to sell your findings. That is, every time you publish something, you need to explain why your findings are interesting and what major problem you hope they might be solving in the future. To be successful you need to stretch that to the limit of what you can reasonably claim. But there are some people who regularly go over that limit and claim things that are not supported by the data at all.

Then there is fraudulent science, but that deserves a category of its own, although the line between bad science and fraudulent science is a bit fuzzy.


I agree that a signal has to be investigated. I think the "bad science" comes in when a large number of highly trained people responsible for a hugely expensive piece of investigative equipment manage to waylay themselves and the equipment by not immediately examining the more obvious possibilities and instead jumping on the wildest possibility. Anyone operating a radio telescope who doesn't look at an anomalous signal and say "Where are the high energy signal generators in the immediate vicinity and what are they doing?" first scores no points in my class.
 
Formy...I notice that unlike Valka, who had to vent on me before giving an opinion, you don't bother with the question being addressed at all. Apparently you just want to pick a fight with me, or Uppi, or whoever else catches your attention at the moment. Do us all a favor, get out from behind your keyboard and take your fight picking out into the physical universe.
 
If you are going to use a definition of the word "bad" that is so broad that it is synonymous with "unproductive" and "poorly executed", then there is bad science.

I would argue that your definition of the word bad makes it useless because it is so non-specific. Bad is already a word that has a lot of different meanings and is so loosely defined as to be pretty useless to begin with, and your usage isn't making it any better.

An acceptable criticism. I agree that the word "bad" has so many definitions, which overlap in varying degrees, that it is pretty useless. Since my two usages are well within existing definitions, I am not going to take the blame for it being a uselessly ambiguous word. That happened long before I came along.

As stated, my attention was drawn initially when someone responded to "there's good science and bad science" with a diatribe that amounted to "science is not EVIL!!!!" They denied all ambiguity in the word, picked a definition that made the original statement "wrong," and launched corrective nukes.

So this thread is, in many respects, specifically about the ambiguities of the word "bad." But I did get very specific about what definitions I thought merited exploration.
 
But also in the real sciences you often read things where the methodology is not optimal (not bad), where the data is not good (happens), and where conclusions are blown up (maybe a bit bad). The problems come when the methodology is totally below standard, the hypotheses are not properly tested, there are no proper controls, etc., etc. Totally awful examples don't happen that often, but examples with a few of these things in them... too often.

Nope. Repeated testing and statistical testing of the results is always used to rule out bogey results. If the results from one repeat are found to be different to 99 others, it won't make the slightest dent in the overall peer-reviewed conclusion.

That's also a very strong reason why 'one reference is never adequate' on undergrad degrees. The main science journals don't just publish random whacko lunatic results and conclusions if they are shown to be far from the norm, unless a statistical majority of repeats produce the same random whacko lunatic results, in which case they are no longer random whacko lunatic results.
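
To make that concrete, here is a minimal sketch (my own illustration in Python, with made-up numbers, not any journal's actual procedure) of how a simple statistical screen can flag one divergent repeat out of a hundred without disturbing the overall conclusion:

```python
# Minimal illustrative sketch: flag a repeat that sits far from the rest of a
# set of repeated measurements, using a simple leave-one-out z-score.
import statistics

def flag_outliers(results, z_threshold=3.0):
    """Return indices of repeats that differ from the mean of the other
    repeats by more than z_threshold standard deviations."""
    flagged = []
    for i, value in enumerate(results):
        others = results[:i] + results[i + 1:]
        mean = statistics.mean(others)
        stdev = statistics.stdev(others)
        if stdev > 0 and abs(value - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# 99 repeats clustered around 10.0, plus one bogey result at 25.0 (made up):
repeats = [10.0 + 0.01 * (i % 7) for i in range(99)] + [25.0]
print(flag_outliers(repeats))  # -> [99]: only the lone divergent repeat is flagged
```

The point being that a single divergent repeat gets identified and set aside rather than overturning the other 99.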
 
You know how that really goes?
In the end it might be a side project of one person, which just takes a long time to publish. In the meantime 5 really important research projects are funded from the money; it's just nowhere officially registered.
I know of one department where at least 4 people are getting paid by a big EU project, but only 1 person actually works on it... sometimes.
So yeah, the research might be a waste, but not necessarily the money which has been allocated for it ^^.

Sometimes. In that particular case the supplier of funds was trying to get backing for banning the product before it came into widespread use. There were inherent performance demands that produced a lot of very fast (and often poorly detailed) reports. The kind of reports that get "reviewed" by lawyers and politicians rather than peers. These were major chemical companies that everyone wanted to "get in good with" in order to get a foot in the door for future projects.

You do make the good point that the money can be put to good use even if it is designated for something stupid...and I cannot say for sure that that did not happen to some extent. It is also possible that by "getting in good" with these chemical companies through giving them a good effort on a bad project, the rewards of funding for good science flowed more freely down the line.

The other good point that has been made to me in the past is that it is possible that advancements in the techniques of observation of bacteria had to be made to get the levels of precision that eventually were achieved. Even if the specific thing being measured was useless, such techniques may have then been used in other research down the line. I cannot say one way or the other whether any such advancements were required, so I conceded that point as well.

Conclusion: some good can come of even "bad" science.
 
Politicians deciding to fire scientists who tell everyone the science doesn't agree with the party line/common sense/public opinion probably counts too.
 
Then I'm glad I didn't actually do anything of the sort.

But I think work whose experimental data and analysis are not even validated by independent parties can hardly be called "science". Therein lies the real problem. Science has likely been hijacked by unscrupulous people to further their own agenda.

[...]

Again, I didn't claim it was "all nonsense".

But I think you can agree that the lax standards used in some fields by some not-so-reputable publishers is also a major part of this problem.

As you pointed out, you see it a lot in the medical publications because there are millions, and perhaps even billions, riding on the outcome of some of the studies.

A full validation of the experiment and the analysis is almost never done, even in the hard sciences. The rare exceptions are usually in case of an accusation of fraud. In all other cases, nobody has the time to validate everything. Peer review will (hopefully) catch glaring errors, but it is no guarantee that the result is correct.

It is not a problem of not-so-reputable journals. All of the studies that failed to replicate in that meta-analysis were peer-reviewed and published in reputable journals and were up to the standards in the field.

Hijacking by unscrupulous people can be a problem, but there are intrinsic problems in regular science, where nobody wants to do bad science, but ends up doing so, because doing it right is not encouraged.

Also, the fossil fuel industry has clearly injected substantial sums into supposed AGW research.

Here is a supposedly "peer-reviewed study" by two members of university business schools whose topic is "Science or Science Fiction? Professionals’ Discursive Construction of Climate Change".

Would you call the supposed "science" conducted by two business school members "bad science" or "pseudo-science" in a study which attempted to do nothing but classify opinions regarding AGW into a few predefined buckets? Why are "professional engineers" even asked for their opinions in this matter since it is clearly completely outside their respective fields?

Certainly pseudo-science. But pseudo-science deserves its own category, apart from bad science.

Nope. Repeated testing and statistical testing of the results is always used to rule out bogey results. If the results from one repeat are found to be different to 99 others, it won't make the slightest dent in the overall peer-reviewed conclusion.

That's also a very strong reason why 'one reference is never adequate' on undergrad degrees. The main science journals don't just publish random whacko lunatic results and conclusions if they are shown to be far from the norm, unless a statistical majority of repeats produce the same random whacko lunatic results, in which case they are no longer random whacko lunatic results.

'One reference is never adequate' might work for undergraduate work. But if you are on the edge of human knowledge, where science is supposed to be, there might be only one reference, maybe even because currently no one else is able to do something similar. So what do you do? You might devote your efforts to creating the second reference, but no one will thank you for that and you will get neither publications nor funds. You can only do something related, but different, in the hope that the reference is correct and someone will find out in the future if it is not.

The problem is not so much the whacko lunatic results. In those cases the rule 'extraordinary claims require extraordinary evidence' tends to apply and your data needs to be very convincing to pass peer review. The problem is the results that seem plausible, where there is no intrinsic reason to doubt them but no resources are available to fully validate them beyond any reasonable doubt.
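
A rough way to see why 'extraordinary claims require extraordinary evidence' has teeth (my own sketch with made-up numbers, not something from the sources above): in Bayesian terms, a claim that starts with a very low prior probability needs a very large evidence factor before the posterior comes out in its favour.

```python
# Rough illustrative sketch: posterior odds = prior odds * Bayes factor.
def posterior_probability(prior, bayes_factor):
    """Posterior probability of a claim, given its prior probability and a
    Bayes factor (how much more likely the evidence is if the claim is true)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1.0 + posterior_odds)

# A plausible claim with modest evidence is already fairly convincing:
print(posterior_probability(0.5, 20))     # ~0.95
# An extraordinary claim with the same evidence remains very unlikely:
print(posterior_probability(1e-6, 20))    # ~0.00002
# It takes an enormous amount of evidence to make it credible:
print(posterior_probability(1e-6, 1e7))   # ~0.91
```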

If it is anything important, an erroneous result will be corrected over time, because other experiments might require a comparison against the old result, and if the baseline is not what it should be you need to investigate that. But that will take time and in the meantime the wrong result stands.
 